Latest news with #AIArt


CNN
24-06-2025
- Entertainment
- CNN
Will AI Replace Human Artists? One Study Says Not Yet - Terms of Service with Clare Duffy - Podcast on CNN Audio
Clare Duffy 00:00:01 'Welcome to another episode of Terms of Service. I'm Clare Duffy. If you've been on the internet at all in the past few years, whether you realized it or not, you've probably come across art that was generated by AI. With programs like Midjourney and DALL-E, which were trained on visual data from across the internet, users can generate images from text prompts in just seconds. Some of these images are beautiful and striking, but a lot of them have contributed to a growing AI slop problem across the internet. And all of this has artists worried about their jobs in a field where it can already be challenging to get viewers' attention. Legal battles have started to emerge over this AI-generated art, too. Earlier this month, Disney and Universal teamed up to sue Midjourney, claiming its AI-generated images of their famous characters violate copyright law. And this trend raises an even bigger question too, what does all this mean for the future of art and creativity? We probably can't answer this in a single episode, but I'm excited to have Sheena Iyengar here to help me think about where to start. Sheena is a professor at Columbia Business School where she and two PhD students recently published a fascinating study about people's perceptions of art. And how that perception changes if the art was generated by AI rather than created by humans. In the study, she showed participants art that was labeled as AI-generated, as well as art labeled as human-generated and had viewers rate them. We'll get into it all, but spoiler alert, people still seem to like human-made art better. Clare Duffy 00:01:45 Hi, Sheena, thank you for being here. Sheena Iyengar 00:01:47 Hi, thank for having me. Clare Duffy 00:01:49 'So it feels like AI-generated art has exploded over the past couple of years with more people using Midjourney and DALL-E and other programs that you describe the image that you want to create and then it pops out a result. When did you first start noticing this phenomenon? Sheena Iyengar 00:02:08 'It was in 2020 when the world was shut down. I actually ran across this pretty well-known computer, well, he's a mechanical engineering professor but does stuff using generative AI. And so I discovered that he was making a robot, a robot that he called an artist. And so what he did was he fed the code all kinds of things, like leaves, trees, everything. It was over a million different let's call them like pieces. And so then what he would do is he would ask the robot to create a piece of art. Now, he didn't train it on what art to create, but he absolutely fed it pieces of art that had been successful. So... Clare Duffy 00:03:07 And this is like a physical humanoid robot? Sheena Iyengar 00:03:10 'Think of it as a printer. Okay. And it would literally in 24 hours, create a painting. And he actually had examples of these paintings. And he was training it to make impressionist art. And we actually showed pictures of these impressionist paintings that were created by this artist, AI artist. And people could not tell that it wasn't made by a human. So that was my first introduction. I saw this and I was like, hmm. This is interesting. What does this mean for creativity? What does it mean for the difference between human made versus non-human made? Clare Duffy 00:03:56 Why is it that this technology and the outputs of it have improved so much in the past few years? Like it feels like back in 2020, 2021, a lot of the AI generated images you would get would be pretty cartoonish. 
It was obvious that they were not real, but now you are more and more seeing very photorealistic images coming out of these AI generators. Why is that? Sheena Iyengar 00:04:21 I mean, you could ask the same question for lots of technologies. It doesn't have to just be AI art, right? I mean why did the, initially when you had the camera that came out in the 19th century, you know, artists didn't think it was beginning to become competition, but you know pretty fast it became better and better and better. Of course we all get better and better because what's it doing? It's iterating. Just like a human being, we're iterating, you know? You see it and it's, ah, that's wrong. Let me fix that part. Then you iterate again, and you fix that. Now, did the camera actually kill art? Well, no. Did it kill portraits? Yeah, right? Because it's cheaper, you could be more precise, and you can be faster to just take a photo of someone than have someone actually make a portrait of you. Just like with the camera, the average individual suddenly has access to something they didn't used to have, you know, every single one of us can now take a selfie. That's kind of... And every single individual can now also take an idea that they're trying to explain to their friends or they're just trying to be therapeutic with themselves and be artistic and they can now take this app and it sort of levels the playing field. Are we all gonna be equal? No. Are some of us going to be better at using the generative AI to make art? Yeah. And we're gonna be more creative about it and that'll become its own art form. And that's what you're seeing. Clare Duffy 00:05:58 'So now that we know a bit more about how AI-generated art is made, what do humans make of the art that's created by computers? That's after the break. 00:06:11 'Okay, so Sheena, you recently worked with two Columbia Business School doctoral students to research how people react to AI-generated art compared to how they perceive human-made art. What question were you trying to answer exactly? Sheena Iyengar 00:06:27 'So we looked at two things. First, can people actually tell the difference between art that's been made by the human versus the non-human? It turns out we really can't. Even though people swear they can tell them apart, they actually don't know if it's human or non-human made. But if they think it's been made by a human, they will price it higher. So that means they value it more than if they think it was made by AI. Clare Duffy 00:07:02 And how did you go about this research? What kinds of art did you show the study participants? Sheena Iyengar 00:07:08 'Well, we couldn't show them art that's famous, right, because then they know. Although you'd be surprised because there's times when I show people the initial self-portrait that Picasso made of himself and people now increasingly tell me, oh, that must have been made by AI. But we, you know, this is a few years ago, so people weren't quite as savvy or at least as convinced that everything is made by AI. So, you know we just showed them art that wasn't famous. Uh, and in some cases we had them tell us, do you think this was made by human or AI? Uh, in some cases, we told them it was made by a human or AI, and, you know, obviously we didn't tell them the truth. And then we asked them to price it or how much they would sell it for. So that was the basic design. Clare Duffy 00:08:02 'And so what did you find? 
It sounds like people do value human-made art above AI-made art, at least for now, if they can tell the difference. Sheena Iyengar 00:08:14 If they believe it was made by the human, so it could have actually been made by AI, but if they thought and labeled it themselves as being made by a human, then they priced it higher. Clare Duffy 00:08:26 And how much higher are we talking, like hundreds of dollars more or? Sheena Iyengar 00:08:31 Oh, I wouldn't focus on the exact price because we didn't tell them this was fancy art. Clare Duffy 00:08:38 What surprised you most about your findings? Sheena Iyengar 00:08:42 'I was mildly surprised that they couldn't tell them apart. I didn't expect that. I suppose after we observed that, I was no longer surprised. We actually even tried it for fun. We actually put it in the Columbia Business School on the main TV screens as you walked in, and we had random people, like MBA students, as they're walking in vote. They couldn't tell either. So I was a little surprised by that. I go back and forth as to whether I'm surprised by the fact that people value something they believed was human made as higher. So of course, if I know that this is let's say a handmade rug versus a machine-made rug, then of course we take it as a given. We would pay more for the handmade thing. So in that sense, it's not surprising. Clare Duffy 00:09:44 'Did any of the survey participants sort of elaborate on that piece, why they felt like the human-made art is more valuable? Is it just because artists put a lot of heart and soul into their works, whereas AI doesn't have heart or soul? Sheena Iyengar 00:09:59 'Yeah. So people think that if it's made by a human, there's more intentionality behind it, that there's a story. I mean, the reason why you made this guy in the way you did is because there's something related to, I don't know, your personal story or, you know, something you saw that you were trying to communicate to the rest of us to see a kind of meaning. Whereas if it is generated by a non-human, you know, it's just random. It doesn't feel meaningful. Clare Duffy 00:10:30 'Has this study changed how you think about AI-generated art? Sheena Iyengar 00:10:36 'Yes, so my Ph.D. student and collaborator, Carl Blaine Horton, I remember he first brought me this idea. I was like, yeah, why would anyone want this AI thing? I mean, it's just weird. You can't call that a real artist. And then he started to send me poems. Because, you know, before AI art, they were doing all this, like, poems and they would be tested by some Turing test or whatever. And he's like sending me these poems and I'm like, no, no, you can't, like, come on, a poem has meaning to it. And he said to me, so what if seeing something, even if it was randomly generated by a non-human, gave you ideas that you otherwise wouldn't have had? Would you consider that meaningless? I said well I guess you're right. It is, it could be used for that. I don't know if you know this, but I am blind, and I love to go to art galleries and learn about the new art. And I will tell you that one of the displays which really fascinated me was we went in and they had like a million new colors that were generated on a canvas using a computer software. I thought to myself, hmm, that's very interesting. People might actually start to create more colors that they otherwise might not have thought of, and they might not have thought of for two big reasons. 
A, we're so used to seeing certain color patterns in our everyday life that we're naturally going to put certain colors together or assume certain variations in color just because of habit. Imagine if there was another entity that didn't have to worry about your predisposed biases and could just be egalitarian and create whatever it wants to without thought to the past. That could be very interesting. And so that was one thing that struck me. And then the second thing that struck me is even if it has no intentionality, which clearly it wouldn't, it could still teach me something. So I believe that the camera taught people something about their world, that they didn't see as easily or as clearly or as precisely as they learn to see through the use of the camera. And I think the same thing applies here. Clare Duffy 00:13:47 'That's so interesting. Yeah, I hadn't thought about that idea that, like, the creativity of it, the sort of randomness of it could potentially create something new in a positive way. I also think when we talk about the reaction to AI-generated art, we also have to talk about the reaction from human artists who have raised concerns that, for example, their works have been used to train AI without their consent. How big a concern is that in your mind? Sheena Iyengar 00:14:19 So I do think they deserve IP, you know, they are using their art for which they would have normally gotten paid. So I think, you know, I think the same principles that we normally apply to patents or trademarks should apply here. I think the laws just have to catch up to that. Clare Duffy 00:14:40 Yeah. I'm wondering if, based on your research, you think the worry that AI could put artists out of work is a valid one? Like, should artists be worried about their livelihoods at this juncture? Sheena Iyengar 00:14:56 'I think so, I'm sure. So if you were the artist that only made portraits and you're not willing to change, it'll be a problem. So one of the examples that I use is the case of Picasso. He had two self-portraits. There's one self-portrait he did in 1901 and another self-portrait that he did in 1907. There's a huge difference between those. And that's because he, during those ensuing years, is dabbling and trying to form a new style that will now have greater value given the advent of the camera. Now he gets influenced by impressionists, and then he mixes that with the influences of African art. And that's what leads him to eventually, starting in 1906 with Les Demoiselles d'Avignon, he starts to create this new art form called Cubism. So sure, if he stayed with his style in 1901, he wasn't gonna go anywhere. Clare Duffy 00:16:18 'This was something I was going to ask, and now I'm almost even more curious what you're going to answer based on our conversation thus far, but it was a question for me going into this, like should we even consider AI-generated art to be art in the first place or if there's something else that we should call it? What is your thought on that? Sheena Iyengar 00:16:36 So, a human made it, in the sense that a human decided what it would be. And in the end, it's not AI that gets to decide if you like it or if you found it interesting or meaningful. That judgment still resides with humans. So I think, look, if you're willing to say that this thing that a camera took, you know, held in a human hand, is art because of the way in which the photographer took it. Same thing, I don't see why the principle is different. 
Clare Duffy 00:17:26 'In your research, it sounds like at least in some of the cases, you labeled the pieces that you were showing the study participants, human-made art or AI-made art. But of course, in the wild, it can be really difficult, increasingly so, to tell if a piece of art online has been AI-generated. Do you think that that labeling is important, like, going forward? Sheena Iyengar 00:17:54 Absolutely. Clare Duffy 00:17:54 Why? Sheena Iyengar 00:17:54 Because it's the same reason why, with other things, you know, like masterpieces in the old days, you want to know whether it was, like, just a replica or whether it was the original. Same thing with, you know, jewelry or, as we were talking about, rugs. I don't know why I'm so fixated on rugs. I recently got a handmade rug. It took about three years to hand make this silk rug. Clare Duffy 00:18:23 Wow. Sheena Iyengar 00:18:25 And it absolutely, you can, I mean, I'm blind, but everyone tells me you can immediately tell that it's different from a machine made thing. Now, even if you couldn't tell, I think most people would feel cheated if they didn't know that it was handmade versus machine made. It just, it does have some greater value. Clare Duffy 00:18:54 Well, and the idea that somebody put time, like time and life into something feels meaningful in some way at least. I'm curious too, I mean we've talked a lot about art that people will see on a screen, but I wonder if, given your findings, you think there is going to be a movement towards more people going to see art in person in museums or galleries. Sheena Iyengar 00:19:20 I think you're already beginning to see that, right? They are, even if you go to the galleries, there's much more emphasis on tactile experiences, auditory experiences. I mean, even recently, if you went to the Biennale, right, there's a lot more of really trying to address other senses other than visual. Clare Duffy 00:19:45 On that note, I'm curious, I mean, because you mentioned that you're blind, do you see a world where AI could make art more accessible to blind folks? Sheena Iyengar 00:19:59 Yeah, I mean, you know, we often think blind people don't have visuals in their mind. But actually, a large part of your visual life has very little to do with your sight. So I constantly live in a visual life, I am constantly trying to describe things in a way that other people will see what's in my head. I make all kinds of visuals. So sure, nowadays you could have a blind person interact with AI and here's a way you could make a collaboration between human and AI something very interesting, right? You could take a blind person and put AI in their hands and now suddenly that blind person could show the sighted what they see. Clare Duffy 00:20:57 Hmm yeah... Sheena Iyengar 00:20:58 That would create value. Clare Duffy 00:20:59 Have you tried that? Sheena Iyengar 00:21:01 No, but I just got the idea as I was talking to you. Clare Duffy 00:21:05 I'm like, I want to do it now. That sounds really interesting. Sheena Iyengar 00:21:11 Yeah, and you could even imagine not just any kind of person, like a deaf person. Clare Duffy 00:21:16 Yeah, I mean, really, I can imagine anybody. We all have sort of inner lives and inner worlds that we think about in different ways. What do you think that we, as consumers of art, should consider next time we come across a piece of AI generated art? 
Sheena Iyengar 00:21:34 Well, I think you should always ask yourself, what does this enable me to discover or see or learn that I didn't think about before? And that's true no matter what. I mean, you know, if you think of like Andy Warhol and, you know, Campbell Soup Cans, right? He gave us a new way to see those. Unto itself, you wouldn't say it was particularly, you know, hard to do. Clare Duffy 00:22:08 Right, that perspective. Sheena Iyengar 00:22:09 Yeah, he just gave us a new way to think about our lives, the world, Campbell Soup cans. Clare Duffy 00:22:16 That's valuable. Yeah, that's interesting. Is there anything I didn't ask you that you think is important to mention about this? Sheena Iyengar 00:22:24 I think there's always a new technology and when that new technology takes over our lives, we get worried. And that's not just true of art, it's true of anything, I mean the car, you know, now we forget how much disruption at the time the car created, or the train. But we as humans are so good at adapting such that we now give ourselves new jobs to do. And I think we shouldn't underappreciate that. Clare Duffy 00:23:07 Yeah, it's so interesting. A big part of the promise of AI, broadly, is like it will take up some of the sort of busy work tasks that people don't want to do and leave more time for things like creativity and human expression. Sheena Iyengar 00:23:24 I doubt that'll be true. I think it'll just mean that you're going to do a lot more in the same amount of time. I mean, did the car really, like how did the car change our lives? Well, you now could own a house because you could live further away. Nice. Did it actually reduce the amount of time you work? No. Clare Duffy 00:23:47 Yeah. Sheena Iyengar 00:23:50 Not a bigger house. Clare Duffy 00:23:52 The expectation is that you can go farther for work. Sheena Iyengar 00:23:53 Same thing happened with Zoom. Clare Duffy 00:23:55 Yeah. Sheena Iyengar 00:23:56 You're still working. If anything, you might even be working longer hours. They made it easier for you to get out of bed and get to work. Clare Duffy 00:24:03 Yeah, it all turns in that direction, unfortunately. Well, Sheena, thank you so much for doing this. Really appreciate it. Sheena Iyengar 00:24:11 Thank you. Clare Duffy 00:24:14 'So if you're concerned about how AI-generated art is going to impact human artists or human expression, here are a few things to keep in mind based on my conversation with Sheena. According to Sheena's study, people still tend to value art that's fully made by humans more than AI-generated art, even though it can sometimes be hard to tell the two apart. So it seems like human creativity still carries that extra spark. We'll link to Sheena and her team's research in the show notes. Remember that even AI-generated art typically has some kind of human intention behind it, even if it's something as simple as a text prompt. As far as honoring artists' work and making it clear when an AI tool is drawing from existing art, that's an ongoing conversation in boardrooms and courtrooms right now. And if you want to support human artists, consider buying their art or going to see it in person, in a gallery or museum. Thanks for tuning in to today's episode of Terms of Service. I'm Clare Duffy, catch you next week.


The Verge
19-06-2025
- Entertainment
- The Verge
AI residencies are trying to change the conversation around artificial art
At a recent exhibition in Copenhagen, visitors stepped into a dark room and were met by an unusual host: a jaguar that watched the crowd, selected individuals, and began to share stories about her daughter, her rainforest, and the fires that once threatened her home — the Bolivian Amazon. The live interaction with Huk, an AI-driven creature, is tailored to each visitor based on visual cues. Bolivian Australian artist Violeta Ayala created the piece during an arts residency at Mila, one of the world's leading AI research centers. These residencies, usually hosted by tech labs, museums, or academic centers, offer artists access to tools, compute, and collaborators to support creative experimentation with AI. 'My goal was to build a robot that could represent something more than human; something incorruptible,' Ayala says. Ayala's jaguar is a clever use of early AI, but it is also emblematic of a wider movement: a fast-growing crop of artist residencies that put AI tools directly in creators' hands while shaping how the technology is judged by audiences, lawmakers, and courts. Residencies like these have expanded rapidly in recent years, with new programs emerging across Europe, North America, and Asia — like the Max Planck Institute and the SETI Institute programs. Many technologists describe them as a form of soft power. Pieces by artists who have participated in AI art residencies have been featured in galleries such as the Museum of Modern Art in New York and Centre Pompidou in Paris. One of the newest programs was started by Villa Albertine, the French American cultural organization. In early 2025, the organization created a dedicated AI track, adding four new residents per year to the 60 artists, thinkers, and creators it hosts annually. The initiative was announced at an AI summit in Paris with French Minister of Culture Rachida Dati and backed by Fidji Simo, OpenAI's CEO of applications. 'We're not choosing sides so much as opening space for inquiry,' says Mohamed Bouabdallah, Villa Albertine's director. 'Some residents may critique AI or explore its risks.' In 2024, Villa Albertine also hosted a summit called Arts in the Age of AI, drawing more than 500 attendees and participants from OpenAI, Mozilla, SAG-AFTRA, and both US and French copyright offices, according to Bouabdallah. Bouabdallah says these programs are designed to 'select the artist, not just their work.' They provide artists with the time and resources needed to explore art projects that use AI. 'Even if someone uses AI extensively, they must articulate their intent. It's not just about output—it's about authorship.' As he puts it, 'The tool must be behind the human.' This kind of cultural framing is meant to promote artistic production, but it can also influence how AI is viewed by the public, pushing back on the often negative perception around AI art. 'An AI developer might want to change minds about what's legitimate by packaging the use of AI in a form that resembles traditional artistic practice,' says Trystan Goetze, an ethicist and director at Cornell University. 'That could make it seem more acceptable.' Residencies may support specific artists, but they don't address the broader concerns around AI art. 'Changing the context from random users prompting models in Discord to formal residencies doesn't alter the core issues,' Goetze says. 'The labor is still being taken.' 
These legal questions around authorship and compensation remain unresolved. In the US, class-action lawsuits by artists against Stability AI, Midjourney, and others are testing whether generative models trained on copyrighted work constitute fair use. Courts will decide these questions, but public sentiment may shape the boundaries: if AI-generated art is culturally perceived as derivative or exploitative, it becomes harder to defend its legitimacy in policy or law, and vice versa. A similar dynamic played out over a century ago. In 1908, the US Supreme Court ruled that piano rolls, then a new format for reproducing music, were not subject to copyright, because they weren't readable by the human eye. Widespread backlash from musicians, publishers, and the public spurred Congress to pass the 1909 Copyright Act, introducing a compulsory licensing system that required payment for mechanical reproductions. 'These models do have a recognizable aesthetic,' Goetze says. 'The more we're exposed to these visuals, the more 'normal' they might seem.' That normalization, he speculates, might soften resistance not just to AI art but also to AI in other domains. 'There's always been debate around inspiration versus plagiarism,' Bouabdallah says. 'The real value here is giving artists the space to grapple with that themselves.' Ayala argues that 'the problem is not that AI copies — humans copy constantly — it's that the benefits are not distributed equally: the big companies benefit most.' Despite those challenges, Ayala sees residencies as important sites of experimentation. 'We can't just critique that AI was built by privileged men, we have to actively build alternatives,' she says. 'It's not about what I want AI to be: it already is what it is. We're transitioning as a species in how we relate, remember, and co-create.'


The Verge
18-06-2025
- Entertainment
- The Verge
What happens when you feed AI nothing
If you stumbled across Terence Broad's AI-generated artwork (un)stable equilibrium on YouTube, you might assume he'd trained a model on the works of the painter Mark Rothko — the earlier, lighter pieces, before his vision became darker and suffused with doom. Like early-period Rothko, Broad's AI-generated images consist of simple fields of pure color, but they're morphing, continuously changing form and hue. But Broad didn't train his AI on Rothko; he didn't train it on any data at all. By hacking a neural network and locking elements of it into a recursive loop, he was able to induce the AI to produce images without any training data at all — no inputs, no influences. Depending on your perspective, Broad's art is either a pioneering display of pure artificial creativity, a look into the very soul of AI, or a clever but meaningless electronic by-product, closer to guitar feedback than music. In any case, his work points the way toward a more creative and ethical use of generative AI beyond the large-scale manufacture of derivative slop now oozing through our visual culture. Broad has deep reservations about the ethics of training generative AI on other people's work, but his main inspiration for (un)stable equilibrium wasn't philosophical; it was a crappy job. In 2016, after searching for a job in machine learning that didn't involve surveillance, Broad found employment at a firm that ran a network of traffic cameras in the city of Milton Keynes, with an emphasis on data privacy. 'My job was training these models and managing these huge datasets, like 150,000 images all around the most boring city in the UK,' says Broad. 'And I just got so sick of managing datasets. When I started my art practice, I was like, I'm not doing it — I'm not making [datasets].' Legal threats from a multinational corporation pushed him further away from inputs. One of Broad's early artistic successes involved training a type of artificial neural network called an autoencoder on every frame of the film Blade Runner (1982), and then asking it to generate a copy of the film. The result, bits of which are still available online, is simultaneously a demonstration of the limitations, circa 2016, of generative AI, and a wry commentary on the perils of human-created intelligence. Broad posted the video online, where it soon received major attention — and a DMCA takedown notice from Warner Bros. 'Whenever you get a DMCA takedown, you can contest it,' Broad says. 'But then you make yourself liable to be sued in an American court, which, as a new graduate with lots of debt, was not something I was willing to risk.' When a journalist from Vox contacted Warner Bros. for comment, it quickly rescinded the notice — only to reissue it soon after. (Broad says the video has been reposted several times, and always receives a takedown notice — a process that, ironically, is largely conducted via AI.) Curators began to contact Broad, and he soon got exhibitions at the Whitney, the Barbican, Ars Electronica, and other venues. But anxiety over the work's murky legal status was crushing. 'I remember when I went over to the private view of the show at the Whitney, and I remember being sat on a plane and I was shitting myself because I was like, Oh, Warner Bros. are going to shut it down,' Broad recalls. 'I was super paranoid about it. Thankfully, I never got sued by Warner Bros., but that was something that really stuck with me. 
After that, I was like, I want to practice, but I don't want to be making work that's just derived off other people's work without their consent, without paying them. Since 2016, I've not trained a sort of generative AI model on anyone else's data to make my art.' In 2018, Broad started a PhD in computer science at Goldsmiths, University of London. It was there, he says, that he started grappling with the full implications of his vow of data abstinence. 'How could you train a generative AI model without imitating data? It took me a while to realize that that was an oxymoron. A generative model is just a statistical model of data that just imitates the data it's been trained on. So I kind of had to find other ways of framing the question.' Broad soon turned his attention to the generative adversarial network, or GAN, an AI model that was then much in vogue. In a conventional GAN, two neural networks — the discriminator and the generator — combine to train each other. Both networks analyze a dataset, and then the generator attempts to fool the discriminator by generating fake data; when it fails, it adjusts its parameters, and when it succeeds, the discriminator adjusts. At the end of this training process, the tug-of-war between discriminator and generator will, theoretically, produce an ideal equilibrium that enables the GAN to produce data that's on par with the original training set. Broad's eureka moment was an intuition that he could replace the training data in the GAN with another generator network, loop it to the first generator network, and direct them to imitate each other. His early efforts led to mode collapse and produced 'gray blobs; nothing exciting,' says Broad. But when he inserted a color variance loss term into the system, the images became more complex, more vibrant. Subsequent experiments with the internal elements of the GAN pushed the work even further. 'The input to [a GAN] is called a latent vector. It's basically a big number array,' says Broad. 'And you can kind of smoothly transition between different points in the possibility space of generation, kind of moving around the possibility space of the two networks. And I think one of the interesting things is how it could just sort of infinitely generate new things.' Looking at his initial results, the Rothko comparison was immediately apparent; Broad says he saved those first images in a folder titled 'Rothko-esque.' (Broad also says that when he presented the works that comprise (un)stable equilibrium at a tech conference, someone in the audience angrily called him a liar when he said he hadn't input any data into the GAN, and insisted that he must've trained it on color field paintings.) But the comparison sort of misses the point; the brilliance in Broad's work resides in the process, not the output. He didn't set out to create Rothko-esque images; he set out to uncover the latent creativity of the networks he was working with. Did he succeed? Even Broad's not entirely sure. When asked if the images in (un)stable equilibrium are the genuine product of a 'pure' artificial creativity, he says, 'No external representation or feature is imposed on the networks' outputs per se, but I have speculated that my personal aesthetic preferences have had some influence on this process as a form of 'meta-heuristic.' I also think why it outputs what it does is a bit of a mystery. 
I've had lots of academics suggest I try to investigate and understand why it outputs what it does, but to be honest I am quite happy with the mystery of it!' Talking to him about his process, and reading through his PhD thesis, one of the takeaways is that, even at the highest academic level, people don't really understand exactly how generative AI works. Compare generative AI tools like Midjourney, with their exclusive emphasis on 'prompt engineering,' to something like Photoshop, which allows users to adjust a nearly endless number of settings and elements. We know that if we feed generative AI data, a composite of those inputs will come out the other side, but no one really knows, on a granular level, what's happening inside the black box. (Some of this is intentional; Broad notes the irony of a company called OpenAI being highly secretive about its models and inputs.) Broad's explorations of inputless output shed some light on the internal processes of AI, even if his efforts sometimes sound more like early lobotomists rooting around in the brain with an ice pick than the subtler explorations of, say, psychoanalysis. Revealing how these models work also demystifies them — critical at a time when techno-optimists and doomers alike are laboring under what Broad calls 'bullshit,' the 'mirage' of an all-powerful, quasi-mystical AI. 'We think that they're doing far more than they are,' says Broad. 'But it's just a bunch of matrix multiplications. It's very easy to get in there and start changing things.'
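To make the data-free setup the article describes a little more concrete, here is a minimal sketch in PyTorch of the general idea: two small generator networks fed nothing but random latent vectors, each trained to imitate the other, with a color-variance term standing in for the loss Broad mentions to push the pair away from gray-blob mode collapse. The architectures, sizes, and exact loss form are illustrative assumptions, not Broad's actual system.

```python
# Illustrative sketch only (assumed architectures and loss weights), not
# Terence Broad's actual code: two generators, no training data, each
# imitating the other, plus a color-variance term to avoid "gray blobs."
import torch
import torch.nn as nn

LATENT_DIM = 64  # size of the random latent vector, the system's only input

def make_generator() -> nn.Sequential:
    # Tiny fully connected generator: latent vector -> 3x32x32 RGB image.
    return nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
    )

gen_a, gen_b = make_generator(), make_generator()
opt = torch.optim.Adam(list(gen_a.parameters()) + list(gen_b.parameters()), lr=1e-4)

for step in range(1_000):
    z = torch.randn(16, LATENT_DIM)          # random latents, no dataset anywhere
    img_a = gen_a(z).view(16, 3, 32, 32)
    img_b = gen_b(z).view(16, 3, 32, 32)

    # Each network is pulled toward the other's output...
    imitation = nn.functional.mse_loss(img_a, img_b)
    # ...while a variance term rewards non-uniform, colorful images, so the
    # trivial solution (both emit the same flat gray field) is penalized.
    color_variance = img_a.flatten(2).var(dim=-1).mean() + img_b.flatten(2).var(dim=-1).mean()

    loss = imitation - 0.1 * color_variance
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "latent vector" Broad describes: interpolating between two points in
# latent space yields a smooth morph between generated images.
z0, z1 = torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM)
frames = [gen_a((1 - t) * z0 + t * z1).view(3, 32, 32) for t in torch.linspace(0, 1, 30)]
```

The last two lines correspond to the smooth transitions Broad talks about: the continuously morphing color fields in (un)stable equilibrium come from walking through the latent space rather than from any stored images.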


Geeky Gadgets
06-06-2025
- Entertainment
- Geeky Gadgets
AI Art That Sees, Hears and Reacts: The Future of Interactive Creativity
What if art could see you, hear you, and even respond to your presence? Imagine stepping into an installation where the colors shift with your movements, the sounds evolve based on your gestures, and the entire space seems alive, attuned to your every action. This isn't a scene from a sci-fi film—it's the reality of AI-powered art installations. By embedding artificial intelligence into the very fabric of artistic expression, creators are transforming passive displays into dynamic, interactive experiences. These installations don't just exist to be observed; they invite you to become an active participant, blurring the line between audience and artist. It's a bold reimagining of what art can be, and it's happening now. In this project, Rootkid shows how the installation adapts in real time, using advanced algorithms to respond to movement, sound, and even emotion, creating a personalized experience that evolves with you. But this isn't just about technology—it's about the profound questions it raises. Can a machine truly create art, or is it merely an extension of human ingenuity? And what does it mean when the audience becomes part of the creative process? As we delve into this fusion of art and AI, prepare to rethink not only how you view art but also your role in shaping it.
How AI is Transforming Art
The integration of AI into art installations represents a significant evolution in how art is both conceptualized and experienced. Unlike traditional static displays, AI-powered installations are dynamic, adapting in real time to their environment and audience. Advanced machine learning algorithms analyze various inputs—such as movement, sound, or even emotional cues—and adjust the artwork accordingly. For example, as you move through the space, the installation might alter its visuals, soundscapes, or even textures, creating a personalized and immersive experience that evolves with your presence. This transformation is not limited to visual or auditory elements. AI enables artists to explore new dimensions of creativity by generating patterns, forms, and interactions that were previously unattainable. The result is a form of art that is not only reactive but also deeply engaging, offering you a unique encounter every time you engage with it.
From Viewer to Participant: Interactive Art
One of the most compelling aspects of AI-driven art is its ability to transform you from a passive observer into an active participant. Through the use of sensors, cameras, and other responsive tools, these installations react to your actions, fostering a deeper connection between you and the artwork. This interactivity challenges the traditional one-way relationship between art and its audience, creating a collaborative experience where your presence directly influences the outcome. Imagine walking through an installation where your movements dictate the colors, shapes, or sounds that emerge. Your gestures might trigger cascading lights, shifting patterns, or evolving soundscapes, making you an integral part of the creative process. This level of engagement not only enhances your experience but also redefines the role of the audience in the artistic journey. By interacting with the installation, you contribute to its narrative, blurring the lines between creator and participant.
AI is Turning Art Into a Living, Breathing Experience (video by Rootkid on YouTube)
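For readers curious what the "sensors and cameras" layer might look like in practice, here is a minimal, hypothetical sketch in Python with OpenCV (not Rootkid's actual code): a webcam frame-difference measure of how much the room is moving, mapped onto the hue of a projected color field. A real installation would feed the same signal into a generative model, lighting rig, or sound engine rather than a flat color.

```python
# Hypothetical sketch of a camera-driven installation loop: estimate motion
# between frames and map the motion level onto a visual parameter (hue).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                 # webcam acts as the installation's "eye"
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion estimate: fraction of pixels that changed since the last frame.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion = np.count_nonzero(mask) / mask.size
    prev_gray = gray

    # Map motion (0..1) onto hue: a still room gives cool blues, lots of
    # movement shifts toward warm reds.
    hue = int(120 - 120 * min(motion * 10, 1.0))
    canvas = np.full((480, 640, 3), (hue, 255, 255), dtype=np.uint8)
    cv2.imshow("installation", cv2.cvtColor(canvas, cv2.COLOR_HSV2BGR))
    if cv2.waitKey(1) == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The same pattern generalizes to the other inputs the article mentions: swap the frame-difference step for audio level, pose estimation, or an emotion classifier, and the mapping step for whatever parameters the artwork exposes.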
AI as a Creative Partner
Artificial intelligence introduces a new dimension to the creative process, acting as both a tool and a collaborator. Unlike traditional methods where the artist solely dictates the outcome, AI brings its own capabilities to the table, processing vast amounts of data, identifying patterns, and generating novel artistic outputs. In these installations, AI responds to your interactions while contributing its own creative elements, resulting in a partnership that combines human ingenuity with machine intelligence. This collaboration demonstrates how AI can enhance, rather than replace, traditional artistic practices. By using AI's ability to analyze and adapt, artists can push the boundaries of their creativity, exploring new forms of expression that were previously unimaginable. For you, this means encountering art that is not only visually and intellectually stimulating but also deeply personal and interactive.
Redefining Artistic Boundaries
The incorporation of AI into art installations challenges conventional definitions of art and authorship. The final output is shaped not only by the artist's vision but also by the AI's autonomous contributions and your interactions. This raises thought-provoking questions about the nature of creativity and authorship: Can a machine truly create art, or is it merely executing programmed instructions? What role do you, as the audience, play in shaping the final piece? These questions encourage you to reflect on the evolving relationship between art and technology. By blurring the lines between human and machine creativity, these installations redefine what it means to create and experience art. The collaborative process between artists, AI, and audiences opens up new possibilities for storytelling, experimentation, and engagement, paving the way for a more inclusive and dynamic artistic landscape.
A Dynamic, Ever-Changing Experience
One of the most captivating aspects of AI-driven art is its ability to evolve in real time. These installations are not static objects but living entities that adapt to your presence and actions. Whether through shifting visuals, adaptive soundscapes, or other interactive elements, the artwork continuously transforms, making sure that no two experiences are ever the same. This dynamic quality invites repeated engagement, as each interaction offers a new perspective or narrative. For you, this means that the installation becomes more than just a piece of art—it becomes an ongoing dialogue, a story that unfolds differently with every encounter. This ever-changing nature not only holds your attention but also highlights the potential of AI to create art that is both innovative and deeply engaging.
Pioneering Techniques in Modern Art
The use of AI in art installations exemplifies the innovative techniques shaping contemporary art. Artists are increasingly using technology to explore new mediums and methods of expression, pushing the boundaries of what is possible. These installations serve as a prime example of how AI can be used to create works that are not only visually striking but also intellectually and emotionally resonant. By merging art and technology, these installations open up new possibilities for storytelling, audience interaction, and creative experimentation. For you, this means experiencing art that is more immersive, personalized, and thought-provoking than ever before. 
As AI continues to evolve, its role in the art world is likely to expand, offering new opportunities for collaboration and innovation.
The Future of Art and AI Collaboration
As AI technology advances, its potential to transform the art world becomes increasingly apparent. The collaboration between artists and AI has the power to unlock unprecedented forms of creativity, resulting in works that are more interactive, personalized, and engaging. For you, this means encountering art in ways that were once unimaginable, where your presence and actions play a central role in shaping the experience. The integration of AI into art installations is just the beginning of a broader movement toward a more technologically enriched artistic landscape. By combining the strengths of human creativity with the capabilities of AI, artists can explore new frontiers of expression, creating works that challenge, inspire, and captivate. This fusion of art and technology not only enriches the artistic experience but also paves the way for future innovations in AI-driven creativity.
Media Credit: Rootkid


Forbes
30-05-2025
- Entertainment
- Forbes
‘Escape From Tarkov' Players Angry After AI Art Found In New Map
The new Iceberg map is proving controversial. Earlier this week a new map called Iceberg launched in Escape From Tarkov Arena, and fans have since discovered that it features AI-generated artwork. As you might expect, a lot of them are not happy. Yesterday, patch 0.3.1 for Escape From Tarkov Arena was launched, and the headline new addition was a new map for the Blast Gang and CheckPoint game modes called Iceberg, which is set in a luxury hotel that is now being used for the combat arena sport. By most accounts, it's a decent map, with a lot of players seeming to have fun in the first day of action. However, fans have now discovered some AI-generated artwork on the map, and quite a few are angry and disappointed about its inclusion. The top thread on the Escape From Tarkov sub-Reddit is currently highlighting some of the AI art and asking the developers not to use it again in either Arena or the main Tarkov game. Throughout the Iceberg map, there are posters on the walls that parody other games. Some examples include parodies of the iconic Dark Souls graphic of a character next to a bonfire, the key art of the recently released Kingdom Come Deliverance 2, and artwork that appears to be referencing Steam hit Lethal Company. Many fans on the sub-Reddit theorised these images were made with AI, and a member of the Escape From Tarkov PR team has now confirmed to me that that is the case. As is the case with a lot of AI work, there has been somewhat of a backlash from areas of the Tarkov community, with some fans calling the use of AI lazy and many sharing their disappointment that developer Battlestate Games has chosen to use this technology instead of commissioning human artists. However, others are arguing that this is a good use of AI, with features of the map that are inconsequential to gameplay and something that will barely be noticed by most players. Had these images not been made by AI, these would have been some very cool Easter eggs for players to discover on the new map, but now they are creating somewhat of a negative storm, which is a common occurrence in the world of Tarkov. It's only been a year or so since the extremely controversial Unheard Edition of Tarkov that saw a lot of fans pledge to never come back to it. If you want to see the AI artwork on the Iceberg map yourself, then you are in luck, as Escape From Tarkov Arena is having a free weekend right now. You can play the game for free until 6:00 PM MSK on June 2, and there is a new task chain and double XP to enjoy when you do. If you want to purchase Arena after trying it, there is also a 20% discount for the duration of the weekend. If you would rather watch, then Twitch Drops for Tarkov Arena are currently live as well, so you can get some in-game rewards for watching the action.
The new Iceberg map is proving controversial. Earlier this week a new map launched in Escape From Tarkov Arena called Iceberg, however fans have since discovered that it features AI generated artwork, and as you might expect, a lot of them are not happy. Yesterday, patch 0.3.1 for Escape From Tarkov Arena was launched, and the headline new addition was a new map for the Blast Gang and CheckPoint game modes called Iceberg, which is set in a luxury hotel that is now being used for the combat arena sport. By most accounts, it's a decent map, with a lot of players seeming to have fun in the first day of action. However, fans have now discovered some AI generated artwork on the map, and quite a few are angry and disappointed about its inclusion. The top thread on the Escape From Tarkov sub-Reddit is currently highlighting some of the AI art, and asking the developers to not use it again in either Arena or the main Tarkov game. Throughout the Iceberg map, there are posters on the walls that parody other games. Some examples include parodies of the iconic Dark Souls graphic of a character next to a bonfire, the key art of the recently released Kingdom Come Deliverance 2 and artwork that appears to be referencing Steam hit Lethal Company. Many fans on the sub-Reddit theorised these images were made with AI, and now a member of the Escape From Tarkov PR team has confirmed to me that that is the case and these images were made with AI. As is the case with a lot of AI work, there has been somewhat of a backlash from areas of the Tarkov community, with some fans calling the use of AI lazy and many sharing their disappointment that developers Battlestate Games has chosen to use this technology instead of commissioning human artists. However, others are arguing that this is a good use of AI, with features of the map that are inconsequential to gameplay and something that will barely be noticed by most players. FEATURED | Frase ByForbes™ Unscramble The Anagram To Reveal The Phrase Pinpoint By Linkedin Guess The Category Queens By Linkedin Crown Each Region Crossclimb By Linkedin Unlock A Trivia Ladder Had these images not been made by AI these would have been some very cool Easter eggs for players to discover on the new map, but now they are creating somewhat of a negative storm, which is a common occurrence in the world of Tarkov. It's only been a year or so since the extremely controversial Unheard Edition of Tarkov that saw a lot of fans pledge to never come back to it. If you want to see the AI artwork on the Iceberg map yourself, then you are in luck, as Escape From Tarkov Arena is having a free weekend right now. You can play the game for free until 6:00 PM MSK on June 2, and there is a new task chain and double XP to enjoy when you do. If you want to purchase Arena after trying it, there is also a 20% discount for the duration of the weekend. If you would rather watch, then Twitch Drops for Tarkov Arena are currently live as well, so you can get some in game rewards to watching the action.