
This AI Tool Creates Images That Update While You Type. How to Use It
And then there's me: someone simply looking for a way to generate images with an AI tool that feels somewhat ethical, and a bit more stimulating than typing one prompt and waiting for AI to deliver something.
That's exactly what I enjoyed about Artbreeder, an AI-powered platform that creates and modifies images by blending traits from existing visuals.
What is Artbreeder and how does it use AI?
Created in 2018 -- and originally named Ganbreeder -- Artbreeder is a platform that uses generative adversarial networks (GANs) to generate images, then lets you adjust them with "sliders." These sliders control attributes of an image, giving you more say in the final result.
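Artbreeder doesn't publish its internals, but the slider-and-crossbreed idea maps onto a standard GAN technique: interpolating between latent vectors. A minimal sketch below uses NumPy; the vector size and the `blend` function are illustrative assumptions, not Artbreeder's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "parent" images, each represented by a latent vector.
# In a real GAN these would come from random sampling or an encoder.
parent_a = rng.standard_normal(512)
parent_b = rng.standard_normal(512)

def blend(a: np.ndarray, b: np.ndarray, slider: float) -> np.ndarray:
    """Linearly interpolate between two latent vectors.

    slider=0.0 reproduces parent A, slider=1.0 reproduces parent B,
    and values in between "crossbreed" the two.
    """
    return (1.0 - slider) * a + slider * b

# Dragging a slider to the midpoint yields an even mix of both parents.
child = blend(parent_a, parent_b, 0.5)
assert np.allclose(child, 0.5 * (parent_a + parent_b))
```

A generator network would then turn `child` into pixels; moving a slider just moves the point in latent space, which is why the image shifts smoothly rather than being regenerated from scratch.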
Artbreeder has four pricing tiers, from a free plan up to one priced at $31 a month. The tiers differ in the number of credits available each year, and paid plans include benefits like faster rendering and downloads, privacy controls and advanced features.
Artbreeder's text-to-image feature, added in late 2024, lets you describe the type of image you'd like to create while crossbreeding two or more existing images.
Each model within Artbreeder is trained on a curated dataset, with the aim of producing category- or text-specific images. The intended outcome is that you're co-creating the art in real time. (Which I found much more interesting as I typed.)
How to use Artbreeder to create images while you type
This is how to use Artbreeder's website for text-to-image creation:
Create a free account on Artbreeder. You can also upgrade if you'd like more credits. At the top of the webpage, Artbreeder has three tabs: Create, Community and Browse. Community and Browse will give you access to trending images and recent posts, while Community provides direct access to what other users are whipping up within the platform. In Create, there are various tools, including Animator, AI Director and Composer -- this is located in the upper right hand part of the screen and where you can work with text-to-image generation. You'll see a button up top labeled Description. As you begin typing, the image below will begin to appear, shifting as you edit or add words to your description. I've tried multiple AI text-to-image generators, yet never witnessed the image being created in front of me! You also can upload personal photos to customize or evolve over time using Artbreeder's tools. At the bottom of the screen are multiple buttons with other features, including reading more about Composer, enhancing your image output and downloading an image to your desktop. Once your creation feels complete, you can also share your creations with others under the Community tab.
Should you use Artbreeder?
Images shift and develop as you continue typing your prompt.
Artbreeder / Screenshot by CNET
Artbreeder is an easy-to-use AI platform with myriad features and tools. Its editing gives you more control and flexibility over image output -- and the ability to watch the process along the way.
Now, for anyone curious about Artbreeder's ethics, it's important to note that generated images are released under CC0 (Creative Commons Zero), which means your creations are considered public domain.
Also, you can only upload public domain images or images of your own -- but the datasets its models were trained on may contain copyrighted works (Artbreeder didn't immediately respond to a request for comment on this). That feeds into the long-standing debate around infringement vs. fair use. So while its licensing of user creations may make them safer for reuse than those from other tools, there's still a gray area around content boundaries and image protection.
Essentially, Artbreeder is ideal for people in the early creative stages, like those developing characters or looking to experiment with concepts quickly and visually. I can also see writers building fictional worlds, or students learning about AI art, finding it valuable.
Yet professional designers who need more precision than its output offers, and creators concerned about training-data ethics, may find it too limited or risky. It's a great tool for exploration and inspiration, but not always the final destination for high-level work or premium output. Still, it delivers vivid output and keeps experimenting with AI image generation readily within reach.