How to Improve Website Performance for Better User Experience


In the fast-paced digital world, a website's performance is more critical than ever. Visitors expect websites to load quickly, operate smoothly, and offer a seamless browsing experience. Whether you're running a blog, an e-commerce platform, or a business site built on TYPO3, optimizing your website's performance is essential for retaining visitors and improving your online visibility.
A fast-loading website helps reduce bounce rates and keeps users engaged longer. Studies suggest that if a site takes more than three seconds to load, nearly 40% of users will abandon it. Beyond user satisfaction, search engines like Google also use site speed as a ranking factor. So poor performance doesn't just lose you customers—it can also cost you traffic from search engines.
Several components contribute to overall website performance. Understanding and optimizing these factors can make a significant difference:
Your hosting environment plays a foundational role. Shared hosting might be cost-effective but often results in slower performance due to shared resources. Consider switching to a dedicated server or cloud-based hosting with scalable resources, especially for larger TYPO3 websites.
High-resolution images slow down load times. Use image compression tools like TinyPNG or built-in TYPO3 extensions to reduce file sizes without compromising quality. Implement lazy loading for below-the-fold images to defer loading until the user scrolls.
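Lazy loading itself is usually just a matter of adding the browser-native loading="lazy" attribute to images. As a minimal sketch (not a TYPO3 API—the function name and regex approach here are illustrative only), a post-processing step over rendered HTML might look like this:

```python
import re

def add_lazy_loading(html: str) -> str:
    """Add loading="lazy" to <img> tags that don't already declare a loading mode."""
    def rewrite(match: re.Match) -> str:
        tag = match.group(0)
        if "loading=" in tag:
            return tag  # tag already specifies its loading behavior
        return tag[:-1] + ' loading="lazy">'
    return re.sub(r"<img\b[^>]*>", rewrite, html)

html = '<img src="/hero.jpg" alt="Hero"><img src="/logo.png" loading="eager">'
print(add_lazy_loading(html))
```

In practice you would set this attribute in your templates (or via an extension) rather than rewriting output, and keep above-the-fold images eager so they render immediately.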
Caching is vital for performance. TYPO3 supports powerful caching mechanisms, including static file caching and memory-based solutions like Redis or Memcached. These help serve content faster by avoiding repeated database queries.
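The idea behind all of these mechanisms is the same: compute a response once, then serve the stored copy until it expires. A minimal time-to-live cache sketch (illustrative only—Redis, Memcached, and TYPO3's caching framework do this far more robustly) shows why repeated database queries disappear:

```python
import time

class TTLCache:
    """Minimal time-based cache: serve a stored value until it expires,
    so the expensive lookup (e.g. a database query) runs only on a miss."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_compute(self, key, compute):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]  # cache hit: skip the query entirely
        value = compute()    # cache miss: run the query once
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = 0
def fake_query():
    global calls
    calls += 1
    return "page content"

cache = TTLCache(ttl_seconds=60)
cache.get_or_compute("page:1", fake_query)
cache.get_or_compute("page:1", fake_query)
print(calls)  # the query ran only once
```

The trade-off to tune is the TTL: longer values mean fewer queries but staler content, which is why TYPO3 also supports explicit cache clearing when content changes.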
Removing unnecessary characters from code files makes them lighter and faster to load. TYPO3 offers configurations and third-party extensions to automate minification and bundling of static resources.
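To make the idea concrete, here is a deliberately naive CSS minifier sketch (real pipelines use dedicated, battle-tested tools—this only illustrates what "removing unnecessary characters" means):

```python
import re

def minify_css(css: str) -> str:
    """Naive CSS minifier: drop comments and collapse whitespace.
    For illustration only; production builds should use a real minifier."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove /* comments */
    css = re.sub(r"\s+", " ", css)                   # collapse runs of whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # trim space around punctuation
    return css.strip()

print(minify_css("""
/* header styles */
h1 {
    color: #333;
    margin: 0 auto;
}
"""))
```

The same principle applies to JavaScript, where minifiers additionally shorten variable names; bundling then reduces the number of HTTP requests by merging files.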
As your website grows, your database can become bloated with unused records. Regular cleanup and optimization can speed up database queries, enhancing the user experience. TYPO3 offers tools to maintain your database health.
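TYPO3 soft-deletes records by setting a flag rather than removing rows, so cleanup means purging flagged records. A self-contained SQLite sketch of that pattern (the table layout here is simplified for the demo, not TYPO3's actual schema):

```python
import sqlite3

# Illustrative schema: TYPO3 marks records with a "deleted" flag instead of
# removing them, so old rows accumulate until they are purged.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (uid INTEGER PRIMARY KEY, title TEXT, deleted INTEGER)")
conn.executemany(
    "INSERT INTO pages (title, deleted) VALUES (?, ?)",
    [("Home", 0), ("Old landing page", 1), ("Contact", 0), ("Draft", 1)],
)

# Purge soft-deleted rows so future queries scan fewer records
removed = conn.execute("DELETE FROM pages WHERE deleted = 1").rowcount
conn.commit()
remaining = conn.execute("SELECT COUNT(*) FROM pages").fetchone()[0]
print(removed, remaining)
```

In a real TYPO3 installation you would use the built-in maintenance commands (e.g. the "lowlevel" cleanup tasks) rather than hand-written SQL, and always back up the database first.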
A significant percentage of internet users access websites from mobile devices. Ensuring your site is responsive and performs well on mobile is not optional—it's a necessity. Optimize font sizes, spacing, and media queries to deliver a smooth experience on smaller screens.
You can't improve what you don't measure. Here are a few tools to monitor and test website performance:
Google PageSpeed Insights: offers recommendations for both mobile and desktop performance improvements.
GTmetrix: provides detailed insights into load times and what's slowing your site down.
WebPageTest: useful for analyzing speed from different locations and browsers.
For TYPO3-specific insights, running a TYPO3 Website Check can give you a thorough analysis of your site's health and performance.
TYPO3 is a robust and scalable CMS, but without proper configuration, even TYPO3 sites can suffer from speed issues. Here are a few TYPO3-specific suggestions:
Disable unused extensions.
Use the TYPO3 scheduler to automate tasks like cache clearing.
Monitor system logs for errors or performance bottlenecks.
To further enhance your site's health, consider using the TYPO3 Website Check. This specialized tool offers a comprehensive audit that highlights performance issues, SEO gaps, security vulnerabilities, and TYPO3-specific configuration problems. It's a valuable resource for developers and site administrators alike.
Even seasoned developers make mistakes that hurt website performance. Avoid these common pitfalls:
Ignoring mobile optimization.
Installing too many plugins or extensions.
Failing to update TYPO3 to the latest version.
Not using a CDN (Content Delivery Network) for global audiences.
If you find yourself encountering any of these issues, a regular TYPO3 Website Check can help identify and correct them before they escalate into bigger problems.
Site performance and SEO are closely linked. A slow website affects not just user satisfaction but also your position in search engine results. Improving performance enhances crawlability, increases dwell time, and reduces bounce rates—all critical SEO metrics.
In fact, many SEO audits now include performance tests as a core component. If you're serious about ranking well, make speed optimization a top priority and complement your efforts with a TYPO3 Website Check.
Website performance is no longer just a technical concern—it's a business priority. A faster, smoother website means happier users, better SEO rankings, and higher conversion rates. While tools like Google PageSpeed Insights offer a general perspective, for TYPO3 users, nothing beats a specialized TYPO3 Website Check to ensure optimal configuration and operation.
TIME BUSINESS NEWS

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Slice Master: A Browser-Based Arcade Game on the Path to Entity Verification
Slice Master: A Browser-Based Arcade Game on the Path to Entity Verification

Time Business News

time15 minutes ago

  • Time Business News

Slice Master: A Browser-Based Arcade Game on the Path to Entity Verification

In the age of algorithmic search and digital brand-building, being a game isn't enough. To compete, browser-based games must now operate like startups: with branding, structured content, and verifiable online presence. Enter Slice Master — a fast-paced arcade game that's not only entertaining but also blazing a trail in entity-based SEO and official digital recognition. Unlike many browser games that exist as faceless links on aggregator sites or mobile clone apps, Slice Master is hosted on its own official domain: This is a bold step toward building entity authority — a crucial factor in how modern search engines determine what's real, what's notable, and what deserves to rank. Key features include: 🎮 No app store required – play directly in your browser. – play directly in your browser. 📌 Official website with structured content , schema, and SEO-focused design. , schema, and SEO-focused design. 🚀 Optimized for mobile and desktop , offering a seamless gaming experience. , offering a seamless gaming experience. 🔎 Actively building backlinks and PR presence to support search engine recognition. Slice Master is more than just a fun browser game — it's a case study in how digital creators can build and verify an online product entity. In the same way a SaaS platform earns trust through official docs, media mentions, and structured branding, games like Slice Master are now taking the same route. Whether you're a solo dev or a founder scaling a game studio, the Slice Master model offers insights: Build your own domain (don't rely solely on aggregators) Control your brand narrative across trusted platforms Invest in entity SEO (Google's Knowledge Graph rewards it) Grow user intent through branded search campaigns If you're ready to experience a browser game that respects both the player and the search engine: 👉 Play now at 👉 Watch how Slice Master evolves from a fun arcade game into a verified web entity TIME BUSINESS NEWS

Android 16 QPR1 beta 1.1 is here to save you from stuck navigation buttons
Android 16 QPR1 beta 1.1 is here to save you from stuck navigation buttons

Android Authority

timean hour ago

  • Android Authority

Android 16 QPR1 beta 1.1 is here to save you from stuck navigation buttons

Paul Jones / Android Authority TL;DR Google has released the Android 16 QPR1 beta 1.1 update. This update offers plenty of bug fixes, from unresponsive navigation buttons to fingerprint authentication issues. The new update is available on all of Google's Tensor-powered phones and the Pixel Tablet. Google launched Android 16 QPR1 beta 1 last month, but the early nature of these software releases means you should definitely expect plenty of bugs. Now, Google has released Android 16 QPR1 beta 1.1, bringing plenty of fixes. The company confirmed the new beta software's release on Reddit, while also noting version number BP31.250502.008.A1. Google also posted a changelog, which highlights all the fixes. Android 16 QPR 1 beta 1.1 bug fixes The navigation buttons would become unresponsive in the app drawer or task switcher The progress bar in the media player on the lock screen doesn't reflect the place in media Fixed a crash when trying to open effects in wallpaper The settings app would crash after trying to open the battery menu The lock screen date could get cut off when using a wide clock style The search button has a different color when scrolling The approve button in Device Admin settings is missing Dark album labels appeared in the photo picker when in dark mode, impacting readability The date wasn't appearing on the homescreen Fixed a fingerprint authentication failure on a multi-user Android device in certain low-power conditions There are a few notable fixes here, but I'm particularly glad to see the update addresses an issue where the navigation keys would become unresponsive. After all, you can't exactly use your phone without these keys if you aren't using gesture navigation. The new beta software is available on the Pixel 6 series and newer, the Pixel 6a and newer, the Pixel Fold and newer, and the Pixel Tablet. In other words, if your Pixel device was eligible for the first Android 16 QPR 1 beta, this should be available to you. 
The news also comes as we continue to uncover more features and improvements in Android 16 QPR 1. More recently, we discovered Google is working on a feature to stop HDR content turning screens super bright, as well as a cool suggested wallpaper option. Got a tip? Talk to us! Email our staff at Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.

Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio
Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio

CNN

time2 hours ago

  • CNN

Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio

Audie Cornish 00:00:00 Elon Musk has told this story a couple of times, but it's good context. Elon Musk clip 00:00:04 Larry and Paige and I used to be very close friends and I would stay at his house and I'd talk to Larry into the late hours of the night about AI safety. Audie Cornish 00:00:13 It's the story of what motivated him to invest in AI. So many, many years ago, Larry Page of Google had just scooped up DeepMind, a cutting edge AI company founded by Mustafa Suleyman, who I have interviewed on this podcast. It's a good conversation. I'm gonna stick it in our show notes. But here's how Musk told the story at the New York Times Dealbook Summit last year. Elon Musk clip 00:00:37 'And it became apparent to me that Larry did not care about AI safety. I think perhaps the thing that gave it away was when he called me a speciesist for being pro-humanity. As in, you know, like a racist, but poor species. So I'm like, wait a second, what side are you on, Larry? Audie Cornish 00:01:00 'Google had the talent and the computing power and what seemed like infinite amounts of money to spend on both. Musk considered AI a double-edged sword, and he worried people were not worried enough about the sharper edge. He was hosting dinners talking about this. He was doing the university circuit talking about this. He met multiple times with President Obama to talk about it. Now, at the same time, Sam Altman, top executive at the famous Y Combinator, that's a tech company incubator. Well, he was also looking to take a big swing with a company focused on artificial intelligence. Karen Hao 00:01:37 Altman is a very strategic person. He plays the long game. And one of the things he's very, very good at is getting talent and getting capital towards a specific objective. And so early on, he thought, who are the people that I need to recruit to this project to turn this into a legitimate lab? The first person that he identified was Elon Musk. 
Audie Cornish 00:02:01 'Now the company they co-founded, OpenAI, launched in December of 2015 as a non-profit committed to open collaboration, making its patents and research publicly available. It didn't stay that way. Sky News clip 00:02:18 I think I just spotted behind this man here, Sam Altman, who is the boss of OpenAI, a tech powerhouse himself in the United States, one of many tech leaders who has come here to Saudi Arabia as part of the American delegation to meet the Crown Prince. There he is, Sam Altman, meeting Donald Trump and the Crown prince. Audie Cornish 00:02:39 'OpenAI is now a hybrid company, meaning it's a for-profit and a non-profit entity. They call it capped profit. However, they slice it, two things are true. They stopped sharing their open source code and they are monetizing the technology. Sky News clip 00:02:55 'But interesting, of course, that he's not the only highly, highly wealthy tech bro, shall we say, in this line-up. Going before him was the world's richest man, Elon Musk. And these images are remarkable. And these opportunities for these tech billionaires and multimillionaires is hugely important. Audie Cornish 00:03:16 'OpenAI stands at the center of the AI revolution, and the questions raised by its co-founders, now rivals, remain. Who should control AI? What are the hidden costs of the A.I. Revolution? And as these companies become the new empires, what power do we, ordinary people, have to shape the future? I'm Audie Cornish, and this is The Assignment. Audie Cornish 00:03:44 The story of OpenAI, the company, is not the story of artificial intelligence, but of how that science became a product that you're seeing everywhere, from customer service bots to your company HR department. Tech journalist Karen Hao has spent years chronicling how the story of AI shifted from big hopes to serve humanity to a scramble for power, profit, and influence. Karen Hao 00:04:09 My background is in tech. 
I was a mechanical engineer for undergrad. I went to work in Silicon Valley after graduating from MIT and Audie Cornish 00:04:19 And you're like, I don't want to make any money. I think I'm going to start doing journalism. Is that how it went? Because that's how it sounds like. Karen Hao 00:04:27 Basically. Audie Cornish 00:04:28 'Behind the scenes, she saw something familiar. Companies promising the world, then bending to the pressure of growth and scale. In her new book, Empire of AI, Hao writes about how the industry's founding ideals give way to secrecy and rivalry and the break neck arms race and the rise of a quasi-religious movement. The person at the center, Sam Altman. Karen Hao 00:04:54 'He was the president of Y Combinator, which is one of the most acclaimed startup accelerators in Silicon Valley. It's launched many famous startups. And he had this idea that he wanted to take big swings around hard technology. So he was investing more in quantum, in self-driving cars, in nuclear fusion, and- Audie Cornish 00:05:18 'And we should say, at that time, Silicon Valley's sort of churning out things that we can be describing as the Uber of X, the Airbnb of Y. Yeah, exactly. They were sort of iterations on, I'm not going to say novelty, but social media. They were not taking big swings, so to speak. And so here he is, this guy who has, like, all the connections in the world. Because literally, that's what the Combinator is. It is just- connection and networking and connecting the money to the ideas. Karen Hao 00:05:49 Yes, it was a very, very dense network. Exactly. Audie Cornish 00:05:50 He's the guy at the heart of that. And he decides to take a big swing. Karen Hao 00:05:55 'Early on, he thought, who are the people that I need to recruit to this project to turn this into a legitimate lab? The first person that he identified was Elon Musk. And Musk at the time was talking very publicly, very often about his fears around AI. 
And so Altman, to recruit Musk, he starts saying to Musk, I am like-minded in your views about AI. I'm also worried about AI being very powerful and going wrong, and it seems to me that if it will stay within Google it would go wrong, but the best way to counteract it is to create an AI lab of our own. Audie Cornish 00:06:39 For the good guys. Google is somehow evil and it's interesting because we were talking to I think the founder of DeepMind, Mustafa Suleyman, he was now at Microsoft and like he started his companies because he wanted to be the good guy, right? Karen Hao 00:06:54 Yeah, yeah. Audie Cornish 00:06:55 Everyone's the hero in the tech story of how they make things. Karen Hao 00:06:59 Yes. Audie Cornish 00:06:59 And in this case, yeah, same thing. Karen Hao 00:07:01 Yeah, it's a very, very common theme in AI. Also, just a very common theme in everyone's human experience. Everyone is the main character of their story. Everyone has the one trying to do it better you did hit upon something that is, I think, really key to understanding the AI world and how AI is being developed today. There's a lot of ideological clash that happens where everyone fashions themselves as morally superior to the other, and therefore they have to be the one that is the CEO of a new company, and they're going to do it better than the previous one. Audie Cornish 00:07:39 'And over time, I feel like OpenAI becomes like a cautionary tale, in a way, of like how those ideologies can come to a head. And people may not remember this, but a while back, the company went through a transition where it moved from being kind of non-profit oriented with a non- profit board to that board essentially rebelling and saying, Sam Altman, actually, you're not such a guy. In how you treat us, but also, are you taking all the sort of safety mechanisms seriously that you could? This blows up and becomes an international news story, even though none of us really know why or how. 
We just know that there are some people saying that AI is super, super bad and that he's not, you know, heating the guardrails and him being like, it's fine. And these people, they're just... Who knows what there's, I mean, who knows what could happen? And I see an echo of this conversation over and over again between the people you talk about, the boomers and the doomers. Karen Hao 00:08:45 Yes. Audie Cornish 00:08:46 Who are those two factions and how did they surface in OpenAI? Karen Hao 00:08:50 'Yeah. So one, going back to this idea that there is an ideological clash that really shapes these technologies. One thing that's happened in the last few years within the AI world and within the Silicon Valley world is there are really what can only be described as quasi-religious movements that have been born. Audie Cornish 00:09:10 'Did you say quasi- religious? 00:09:12 'Religious, yes. Quasi-religious movements that have been born and the reason I say this is because there are a lot of people within this world who believe in what I call the artificial general intelligence religion. This is a religion where you think that it is possible to recreate human intelligence. This is something that, there isn't actually scientific consensus on this. So even the people who talk about this, they themselves talk about it as a belief. It's just, you either have the belief or you don't. And if you do have the belief, then the second tenet is you believe it's going to create civilizational transformation. And there are two factions within this religion. There are the boomers who think that civilizatonal transformation will be hugely positive. AGI will bring us to utopia. And the other faction believes that transformation will be hugely devastating. AGI could potentially kill all of humanity. Audie Cornish 00:10:16 Just to be clear, you are not, this is not hyperbole. Like I have seen, this very public language. 
It comes out sometimes in groups of meetings of AI scientists, where you're right. It's either heaven or hell. Karen Hao 00:10:32 'Exactly, and I couch it by saying quasi-religious, but you could actually even argue that it is just full-blown religious because there's no evidence for either of these things. It is just a deep-seated belief, a spiritual belief. They do use spiritual language. Sometimes they even talk about AGI as them recreating digital gods and demons. This is the explicit language that they use. And the reason why I say the boomers and the doomers are factions in the same religion is because they both then conclude the same thing, which is, oh, we are the good guys. We are the ones that have to be in control of developing this technology. And so there is an inherent anti-democratic conclusion that they arrived to, which is we should not make this a democratic process. We should not be getting lots of input from various people. We should be opening up this technology to lots of people. We need to keep a tight fist. Clamp down on this technology and be secretive about it. In the early days of OpenAI when it was founded, it was found in late 2015, there were very few people that believed in the AGI religion. To even speak about the idea that artificial general intelligence was possible meant you were not a serious scientist because this is not based in science. And So the people that were drawn to the explicit premise that OpenAI set out, which was to be bold and ambitious and claim, yes, we are in fact trying to achieve AGI, it only attracted the people who already believed that and it just so happened that it attracted both of the factions, it attracted the boomers and the doomers. So throughout the history of OpenAI, there has always been these two factions constantly in tension with one another and fighting over how do we actually develop this technology? How should we approach deploying the technology? 
And one of the things that I concluded through my reporting was it was this tension that really led to the acceleration, the massive acceleration of AI development. Audie Cornish 00:12:49 I see. So it feels like if we're thinking like, jeez, where'd this come from? Why is AI everywhere? It's not in our heads. There really was an explosion of productivity because it became a kind of arms race in the industry. Karen Hao 00:13:02 'Yes, and the specific moment was when OpenAI decided, you know, ChatGPT was based on a model called GPT 3.5. GPT3, which was the previous generation, that was a giant step change from GPT2 in terms of sheer scale of the model. And OpenAI made a very explicit decision at the time where they thought, we have these AI models in the world today. We have never tried just blowing it up by multiple orders of magnitude. Like multiple orders of magnitude more data, multiple more orders of magnitude of computer chips for training these models. And so whereas GPT-2 I think was originally trained on maybe a few dozen chips, they decided to train GPT 3 on 10,000 chips, an entire supercomputer, one of the largest supercomputers that had ever been built at the time. And it was that jump that kicked off within the industry, the very first race, that was the opening shot that then led a lot of other companies to start swarming around this concept that OpenAI hit on, which is scale at all costs. And after ChatGPT came out, then the race really raised the next level. Audie Cornish 00:14:28 Coming up in the Church of AI, what role does Sam Altman play? Audie Cornish 00:14:34 Somebody said that if you dropped Sam Altmann on an island of cannibals and came back in five years, he'd be king. Karen Hao 00:14:42 Yes, that was Paul Graham. Audie Cornish 00:14:43 'I can never un-hear that. His mentor said. Karen Hao 00:14:47 As a compliment. Audie Cornish 00:14:47 As a compliment? Audie Cornish 00:14:51 Stay with us. 
Audie Cornish 00:14:55 'I know you're saying quasi-religion, but if we're taking the metaphor all the way, who is Sam Altman in this world then, right? Like, is he a pope? Is he a cult leader? Is he, you know what I mean? What is the, where does he start to fall on the spectrum between boomer and doomer, first of all, since we know he started out cautious. And then second of all. Yeah, is it a charismatic leader situation? Like what are we looking at when we see him in the public space really selling us on his vision? Karen Hao 00:15:33 Can I read you a quote from the opening of my book? Audie Cornish 00:15:36 Yes, oh my gosh, please do. I think I know.I think I know the one you're going to read, actually. Karen Hao 00:15:40 So I start my book with two quotes side by side. And the one from Sam Altman goes like this, "Successful people create companies, more successful people, create countries. The most successful people create religions" and this is in quotations that Sam Altmann is quoting this. "I heard this from Qi Lu, I'm not sure what the source is. It got me thinking though, the most successful founders do not set out to create companies. They're on a mission to create something closer to a religion. And at some point, it turns out that forming a company is the easiest way to do so." And so the thing about this religion that's so interesting is they do not pray to a higher power. They are the ones that believe they're creating a higher power. And Altman, I would say, is sort of like Paul Atreides' mom in Dune. Audie Cornish 00:16:36 I love this reference. Keep it going. Yes. Karen Hao 00:16:40 She was the one that created the myth, that created a religion around Paul Atreides, right? And when people encountered that myth, they didn't understand that it was a creation, so they just believed it. I think that's who Altman is. This is just based on, after doing a lot of reporting and understanding who he is, this is my own conclusion. 
This is not like I saw some document where he was talking about these things. I think he understood very, very early on in his career, as evidenced by this quote, that to mobilize people, to mobilize extraordinary resources, you have to create a religious fervor around a quest. And he figured out how to create that by evoking this idea of we are going to create this Artificial General Intelligence and to your question of is he a boomer or a doomer? No one really knows. And this was something that was quite interesting when interviewing people for the book, is regardless of how long they had worked with Altman, how closely they had worked with altman, no one could really say what he believes. And if they were doomers themselves. They thought that maybe Altman was more of a they were boomers themselves... Audie Cornish 00:18:03 Meaning that he cared, that he was concerned, but also the implication was, would care about safety and care about these things they're worried about. Karen Hao 00:18:12 Yes. And if they were boomers, they believed that Altman was certainly in their camp and most likely a boomer. And what I realized when I asked people, I would always ask people, what did Sam tell you in this conversation about what he believed and what the company was doing? I realized he always said different things to different people depending on what they wanted to hear. And so ultimately, I think he will choose whether to embody a more boomer ethos or a more doomer ethos based on what is convenient and what he needs to continue mobilizing, not just his employees, but also mobilizing the public, mobilizing regulators, policymakers to move in a direction that is in the best interest of open AI. Audie Cornish 00:19:02 The quote I thought you were gonna read was the one that, was it Paul Graham? Who said this? Somebody said that if you dropped Sam Altman on an island of cannibals and came back in five years, he'd be king. Karen Hao 00:19:16 Yes, that was Paul Graham. Yeah. 
His mentor. Audie Cornish 00:19:18 I can never unhear that. His mentor said this? Karen Hao 00:19:21 His mentor said this. Audie Cornish 00:19:23 As a compliment? Karen Hao 00:19:26 As a compliment. Audie Cornish 00:19:28 OK, so Karen, I want to move on to something else, which is the way you just described Altman is actually similar to the way people have described Donald Trump. And I'm bringing this up because we are seeing Altman travel in those circles now. Karen Hao 00:19:44 'Mm-hmm. Audie Cornish 00:19:44 In part because of the president's embrace of technology. So just for example, Trump does this three country tour of Saudi Arabia, UAE, Qatar, brings 50 CEOs and I'm watching from my desk on screen and I see Sam Altman there shaking hands. The AI czar is there, David Sachs, he calls it this like game changer in the global AI race. Which the administration thinks like, look, if the US can cement the position before anyone else, they don't have to worry about China, et cetera. But it was just so wild seeing Altman in the court of Saudi Arabia helping make this deal happen, and as we learned in the background reporting, upsetting Elon Musk in the process, who felt like he, that OpenAI was getting more attention than he was. It felt like I was watching a moment where world powers were divvying up something. Yeah. Only those world powers were tech companies. Karen Hao 00:20:52 'Yeah. So, I mean, the reason why I call my book Empire of AI is a nod to this argument that I make in the book that these companies need to be thought of as new forms of empire. And the reason is because empires of old and empires of AI share all the same features. First, they lay claim to resources that are not their own, but they redesign the rules to suggest that it was always their own. 
So they're scraping the internet saying, this was free for the taking, but people did not give their informed consent to the idea that just because you post on social media, you're suddenly going to be fodder for training models that could potentially restrict your economic opportunity. The empires also exploit labor all around the world, and with these AI companies, that not only refers to the fact that they contract a lot of workers around the that then work in extremely poor conditions to do data cleaning, data annotation, and then content moderation for these companies, but also the fact that these organizations are ultimately building labor-automating technologies. OpenAI's definition of AGI is highly autonomous systems that outperform humans in most economically valuable work. So not only is it exploiting labor on the way in, the product is also exploiting Labor. Audie Cornish 00:22:20 Ah, Karen, okay, that's a lot to unpack, but there are a couple things in there I wanna jump in and ask about. You mentioned the labor part of it. We've been talking about AI scientists and venture capitalists and all the people at the top level, and then I'm reading your book and I'm learning it's data workers in Kenya who are, I don't know what, stripping disturbing content out of the responses. Like, how does this even work? Tell me about the humans in AI. Karen Hao 00:22:51 'There was a part in opening eyes history, as we talked about, when they started shifting from being more research oriented to realizing they needed some kind of commercialization. And that meant placing their text generated generation tool that can spew anything into the hands of millions of users. There have been some really infamous scandals in which companies have done this and then it has been a spectacular failure because then the chat bot suddenly starts saying racist, abusive, toxic things in the hands of those users. And so OpenAI was like, we can't have that happen. 
We need to build a content moderation filter that we wrap around all of our models so that when a user says something, if the model does start to generate something, the filter blocks it. It never reaches back to the user. And so they contracted workers in Kenya to build this filter. And what these workers did was they had to, day in and day out, read reams of the worst text on the internet, as well as AI-generated text where OpenAI prompted models to imagine the worst text on the internet. And then those workers had to very carefully annotate into a detailed taxonomy: is this violent content or is this sexual content? Is this extremely graphic violent content? Is this sexual abuse content? And does that involve children? For these workers, we see exactly what happened with the social media era and content moderators in social media. They were left deeply traumatized, and it wasn't just the individuals that were left traumatized. It was their communities, their families, people that depended on them, that lost a person that they depended on. And these models would not be possible. They would not have the success as products that they do without this labor, and it is the most taxing, harmful labor, and these workers are paid two dollars an hour. The only justification is an ideological one. It is a belief that underpins all empires: that there are superior groups of people that have some nature-given or God-given right to rule over and subjugate inferior people.

Audie Cornish 00:25:11 And this is before we get to the workers in Colombia, right, who are labeling images for AI training, or, and I'm not gonna go down the rabbit hole here, the environmental impact that is required by these data centers. We've talked about that in other contexts and suggest our listeners actually check out Terms of Service, which is our podcast on this stuff, and they've gone into it. But it gets to your point that, like, I think we see such a cleaned-up version of ChatGPT, you know, to whatever, help you do kind of like nonsense tasks, that we don't have a real sense of the cost, the human cost.

Karen Hao 00:25:50 It's not just the human cost. It becomes a country cost, where you literally are being dispossessed of the natural resources and the human resources to develop your country further. I mean, that is colonialism, and that is the legacy that we still live with.

Audie Cornish 00:26:07 We see something similar with a couple countries that have tried to say, like, look, we're going to get in on the AI game, and it's cost them.

Karen Hao 00:26:14 Led them to repeat the same thing. So I spoke with activists in Chile and Uruguay. They were aggressively trying to fight data center expansion within their countries, because the governments thought, we really want the foreign direct investment. We want to welcome these companies in to build these data centers. But then there are the communities that actually have to host the data centers, and these data centers have to use fresh water to cool the overheating of the computers. Both of those countries, when those data centers came in, were experiencing historic droughts. And ultimately, it was in that moment that Google said, okay, we are going to come in and build a data center that now uses the fresh drinking water that you do not get to have.

Audie Cornish 00:27:00 There's a recent poll that found that more than 75% of Americans basically want to slow down AI development, to make sure it's done safely or ethically, when they're sort of presented with that option. Do you think there is a version of critical public pressure that could have an effect on this conversation?

Karen Hao 00:27:27 Absolutely —

Audie Cornish 00:27:27 Are you sure? Because when I see the tech people with Trump, you know, getting sovereign wealth money, I'm like, the jig is up.

Karen Hao 00:27:37 Is it going to be incredibly hard? Absolutely.
But you know, one of the features of empires is that they are made to feel inevitable, but historically every single empire has fallen. So it really is possible. It's just going to take an extraordinary amount of work. And I like to think about the full AI supply chain. You know, with these technologies, there are all of these ingredients that the companies need to build these models, and there are all of the spaces in which they then have to have access to deploy those models. The ingredients include the data; they include the land, energy, and water for hosting and powering the supercomputers; they include labor; they include talent; they include all of the intellectual property that writers, artists, and creators have produced over time. And the spaces in which they need to deploy are schools, businesses, the healthcare industry, all of these other types of industries. But one of the things that Silicon Valley has done incredibly well in the last decade is to convince everyone that their resources are in fact Silicon Valley's resources, but we need to remember—

Audie Cornish 00:28:53 What do you mean by that?

Karen Hao 00:28:53 You know, I have friends that will say, there's no data privacy anymore, whatever, they have all of my data anyway, so I'll just continue giving them my data. That's your resource. You have ownership over that. We have ownership over our data, our land, our energy, our water. We have ownership over our schools, our healthcare system. These are actually collectively owned or individually owned resources and spaces. And these companies actually need, they need access to it. And we are the ones that grant them access to it.

Audie Cornish 00:29:28 But we're also told, like, this is gonna cure cancer. This is gonna help us get to Mars. This is going to, like, the AI is going, you know, AI is gonna open, dot dot dot, open doors to this or that. And the Industrial Revolution had its benefits. You know what I mean?
Like, it had its downsides, but it had its benefits. And I guess the reason why I'm wrestling with this is because, yeah, like, I just had this long conversation with Mustafa Suleyman, and he's very, like, this could really help people, you know? I think, like any tool. And then I talk to you and you're just like, power to the people, fight these colonizers. And then I Google somewhere else and they're like, well, Skynet's coming, so I don't know what to tell y'all. You know what I mean? Like, you're going to be attacked by drones. Find a cave. I hear such conflicting information about how I feel about this industry, and I don't know if it's about the technology itself or about the people involved in it.

Karen Hao 00:30:23 Here's what I'll say. When Mustafa Suleyman says it could help people, could is the operative word. And when people say it could be Skynet, could is still the operative word. And what I try to do with my book is say, this is what is literally happening right now. It's not a could scenario; it is the reality. And this reality should be the best evidence that we have in understanding how AI is impacting people now, and how it will continue to impact people in the future, because it demonstrates to us the logic of how this technology is being built and how it's gonna work its way through, ultimately, the fault lines of our society. And right now, it is not looking good. And what I argue is that we need to, and we can actually, turn the ship around. But it cannot be, it's too late, there's no role for us, we should just wait for these overlords to hopefully be nice. You know, there are artists and writers that are now suing these companies, saying, we don't like the fact that you just trained on our intellectual property. You don't get to do that. No, like, this is something that we need to aggressively, collectively shape by taking ownership of that data, taking ownership of that land. And I think everyone wants to be in control of it.
It's just most people don't know how. And I hope that through reading the book, people will start to figure out how.

Audie Cornish 00:31:57 Journalist Karen Hao. Her new book is called "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI."

Audie Cornish 00:32:08 The Assignment is a production of CNN Audio, and this episode was produced by an actual person, Lori Galarreta. Our senior producer is Matt Martinez, Dan Dzula is our technical director, and Steve Lickteig is executive producer of CNN Audio. We had support from Dan Bloom, Haley Thomas, Alex Manassari, Robert Mathers, Jon Dionora, Leni Steinhardt, Jamus Anderus, Nichole Pesaru, and Lisa Namerow. As always, thank you so much for listening. We know you can spend your time in a lot of other places. Please hit that subscribe button, share with a friend, and we'll talk next week.
