
Latest news with #StableDiffusion

Reports indicate a massive uptick in AI-generated CSAM throughout the internet

Engadget

10-07-2025

  • Engadget


AI-generated child sexual abuse material (CSAM) has been flooding the internet, according to a report by The New York Times. Researchers at organizations like the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning that this new AI-created CSAM is nearly indistinguishable from the real thing. Let's go over some numbers. The Internet Watch Foundation, a nonprofit that investigates and collects reports of CSAM, has identified 1,286 AI-generated videos so far this year, compared with just two videos identified in the first half of 2024. That's an exponential increase. "🔎 Developments in artificial intelligence (AI) come with a range of benefits, including supporting learning and innovation. There is, however, growing concern for how AI can also be misused to create and share child sexual abuse material (CSAM), referred to as AI-CSAM. In…" — Internet Watch Foundation (IWF) (@IWFhotline), July 8, 2025. The National Center for Missing & Exploited Children reaffirms those statistics. It told NYT that it has received 485,000 reports of AI-generated CSAM, including still images and videos, in the first half of 2025, compared to 67,000 for all of 2024. That's another massive uptick. "It's a canary in the coal mine," said Derek Ray-Hill, interim chief executive of the Internet Watch Foundation. "There is an absolute tsunami we are seeing." This technology is constantly improving, so the videos and images have become more realistic. The Internet Watch Foundation found an internet forum in which users were praising how realistic the new videos were. Reporting suggests that this content is distributed through the dark web, making it harder for law enforcement agencies to identify the offenders. It's worth remembering how AI image generators work.
They are trained using real images and videos. The New York Times says that much of this new glut of AI-generated content includes real CSAM that has been repurposed by the algorithm. Some of the material even uses real photos of children scraped from school websites and social media. The issue dates back to the early days of this technology. In 2023, researchers at the Stanford Internet Observatory found hundreds of examples of CSAM in a data set used in an early version of the image generator Stable Diffusion. Stability AI says it has introduced safeguards to improve safety standards and "is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM." The episode did lead other companies to start reporting AI-generated CSAM to the National Center for Missing & Exploited Children. Amazon reported a whopping 380,000 instances of AI-generated CSAM in the first half of this year, all of which it took down. OpenAI reported 75,000 cases. "NCMEC Applauds the California State Legislature for Passing AB 1831 and looks forward to it being signed into law. NCMEC supports AB 1831 because it addresses gaps in California's legal remedies for child victims of Generative AI CSAM. We are heartened to see states move…" — National Center for Missing & Exploited Children (@NCMEC), September 4, 2024. Courts have been slow to catch up with this tech. The DOJ made its first known arrest last year of a man suspected of possessing and distributing AI-generated CSAM. A UK man recently got 18 months in jail for using AI to generate the foul images, which he sold. "The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat," Matt Galeotti, head of the Justice Department's criminal division, told NYT.
It's worth noting that despite the alarming uptick in occurrences, AI-generated content still represents a mere fraction of all CSAM identified by authorities and watchdog organizations. For instance, the Internet Watch Foundation confirmed 291,273 reports of CSAM in 2024 and, as previously noted, just two instances were AI-generated.

In test-obsessed South Korea, AI boom arrives in exams, ahead of the technology itself

Straits Times

10-07-2025

  • Business
  • Straits Times


SEOUL - A wave of artificial intelligence certifications has flooded the market in South Korea over the past two years. But according to government data, most of these tests exist only on paper and have never been used by a single person. As of July 9, there were 505 privately issued AI-related certifications registered with the Korea Research Institute for Professional Education and Training, a state-funded body under the Prime Minister's Office. This is nearly five times the number recorded in 2022, before tools like ChatGPT captured global attention. But more than 90 per cent of those certifications had zero test-takers as of late 2024, the institute's own data shows. Many of the credentials are tied to artificial intelligence in name only. Among recent additions are titles like AI Brain Fitness Coach, AI Art Storybook Author and AI Trainer, which often have no connection to real AI technology. Only one of the 505 AI-related certifications — KT's AICE exam — has received official recognition from the South Korean government. The rest have been registered by individuals, companies or private organisations, with no independent oversight or quality control. In 2024, just 36 of these certifications held any kind of exam. Only two had more than 1,000 people apply. Fourteen had a perfect 100 per cent pass rate. And 20 were removed from the registry that same year. For test organisers, the appeal is often financial. One popular certification that attracted around 500 candidates in 2024 charged up to 150,000 won ($110) per person, including test fees and course materials. The content reportedly consisted of basic instructions on how to use existing tools like ChatGPT or Stable Diffusion.
Some issuers even promote these credentials as qualifications to teach AI to students or the general public. The people signing up tend to be those anxious about keeping up in an AI-driven world. A survey released this week by education firm Eduwill found that among 391 South Koreans in their 20s to 50s, 39.1 per cent said they planned to earn an AI certificate to prepare for the digital future. Others (27.6 per cent) said they were taking online AI courses or learning how to use automation tools like Notion AI. Industry insiders warn that most of these certificates hold little value in the job market. A local AI industry official told The Korea Herald that these credentials are often 'window dressing' for resumes. 'Most private AI certifications aren't taken seriously by hiring managers,' he said. 'Even for non-technical jobs like communications or marketing, what matters more is whether someone actually understands the AI space. That can't be faked with a certificate.' THE KOREA HERALD/ASIA NEWS NETWORK

In test-obsessed Korea, AI boom arrives in exams, ahead of the technology itself

Korea Herald

10-07-2025

  • Business
  • Korea Herald


Over 500 new AI certifications have sprung up in Korea in two years, but few are trusted or even taken A wave of artificial intelligence certifications has flooded the market in South Korea over the past two years. But according to government data, most of these tests exist only on paper, and have never been used by a single person. As of Wednesday, there were 505 privately issued AI-related certifications registered with the Korea Research Institute for Professional Education and Training, a state-funded body under the Prime Minister's Office. This is nearly five times the number recorded in 2022, before tools like ChatGPT captured global attention. But more than 90 percent of those certifications had zero test-takers as of late last year, the institute's own data shows. Many of the credentials are loosely tied to artificial intelligence in name only. Among recent additions are titles like "AI Brain Fitness Coach," "AI Art Storybook Author," and "AI Trainer," which often have no connection to real AI technology. Only one of the 505 AI-related certifications — KT's AICE exam — has received official recognition from the South Korean government. The rest have been registered by individuals, companies, or private organizations, with no independent oversight or quality control. In 2024, just 36 of these certifications held any kind of exam. Only two had more than 1,000 people apply. Fourteen had a perfect 100 percent pass rate. And 20 were removed from the registry that same year. For test organizers, the appeal is often financial. One popular certification that attracted around 500 candidates last year charged up to 150,000 won ($110) per person, including test fees and course materials. The content reportedly consisted of basic instructions on how to use existing tools like ChatGPT or Stable Diffusion. Some issuers even promote these credentials as qualifications to teach AI to students or the general public. 
The people signing up tend to be those anxious about keeping up in an AI-driven world. A survey released this week by education firm Eduwill found that among 391 South Koreans in their 20s to 50s, 39.1 percent said they planned to earn an AI certificate to prepare for the digital future. Others (27.6 percent) said they were taking online AI courses or learning how to use automation tools like Notion AI. Industry officials warn that most of these certificates hold little value in the job market. Jeong Sung-hoon, communications manager at Seoul-based AI startup Wrtn, told The Korea Herald that these credentials are often "window dressing" for resumes. Wrtn ranked second in generative AI app usage among Koreans under 30 this March, according to local mobile analytics firm Wiseapp. 'Most private AI certifications aren't taken seriously by hiring managers,' Jeong said. 'Even for non-technical jobs like communications or marketing, what matters more is whether someone actually understands the AI space. That can't be faked with a certificate.'

Stability AI Review: Stable Diffusion Is a Household Name in AI Images for a Reason

CNET

01-07-2025

  • CNET


CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise.

Stability AI/Stable Diffusion — CNET Score: 7.0/10
Pros: Fast generation time; great editing tools; very creative
Cons: Complicated availability; too-realistic product images

If you've heard of AI image generation, you've probably heard of Stable Diffusion. Named for a family of AI creative models, the original Stable Diffusion model was released in 2022 as the result of a collaboration between researchers from Stability AI, Runway and the University of Munich, with support from European AI research and data nonprofits. It quickly found a loyal fanbase of AI enthusiasts who compared it to its main competitor at the time, Midjourney. In the years since its initial launch, tech giants including OpenAI, Adobe and Canva have all released their own popular AI image models. But Stable Diffusion models have one key difference from all the others: They're open source. Open-source AI models let anyone take a peek behind the scenes at how the model works and adapt it to their own purposes. That means there are a lot of different ways to use Stable Diffusion models. I'm not a coding wizard, so I opted not to license or download the models to run locally on my computer. A quick Google search brought up a lot of websites that host SD models, but I wanted the true Stable Diffusion experience. That led me to DreamStudio and Stable Assistant. Both of these are freemium web apps by Stability AI that let you easily create AI images, and I used both. Ultimately, I preferred Stable Assistant, but my experience using both programs showed me why Stable Diffusion models have stayed a household name, even as the people behind the models have had a rocky path. The images I created with Stability AI were creative and detailed. Where the company shines is in its editing capabilities.
Stable Assistant has the most comprehensive, hands-on editing suite of any AI image generator I've tested, without the overwhelming, overly detailed nature of a Photoshop-like professional program. The Stable Image Ultra model is artistically capable, like Midjourney. If you're trying to decide between these competitors, it's probably going to come down to cost and potential commercialization requirements. Stable Assistant is great for people who need to produce a lot of AI imagery quickly and for amateur creators looking to level up their skills and refine their design ideas. DreamStudio will remind you of a more traditional AI image generator, great for budget-conscious, occasional AI creators. For professional creators, Stable Diffusion models are capable, but businesses will need to worry about licensing requirements. Here's how the newest Stable Diffusion model, Stable Image Ultra, held up in my tests, including how well it matched my prompts, response speed and creativity. How CNET tests AI image generators: CNET takes a practical approach to reviewing AI image generators. Our goal is to determine how good each one is relative to the competition and which purposes it serves best. To do that, we give the AI prompts based on real-world use cases, such as rendering in a particular style, combining elements into a single image and handling lengthier descriptions. We score the image generators on a 10-point scale that considers factors such as how well images match prompts, creativity of results and response speed. See how we test AI for more. The easiest way to access Stable Diffusion models is through Stability AI's Stable Assistant and DreamStudio. After a free three-day trial, there are four subscription options for Stable Assistant: Standard ($9 a month for 900 credits), Pro ($19 a month for 1,900 credits), Plus ($49 a month for 5,500 credits) and Premium ($99 a month for 12,000 credits).
I used the lowest tier, and after generating 75 images, I still had about 418 credits left. You also get access to Stability's AI video, 3D model and audio models with these plans. You can also access Stable Diffusion models using DreamStudio. You can initially play around with 100 free credits, then you'll need to upgrade. You can get the basic plan for $12 a month (1,200 credits) or the plus plan for $29 a month (2,900 credits). Stability AI can use the information and files you provide in your prompts (inputs) and the results it generates (outputs) for training its AI, as outlined in the terms of service and privacy policy. You can opt out in Stable Assistant by going to Profile > Settings > Disable training and history. In DreamStudio, you can go to Settings > User preferences > Training: Improve model for everyone and toggle that off. You can learn more about opting out in Stability's privacy center. How good are the images, and how well do they match prompts? Stability was able to create a variety of images in many different styles. I created dramatic fantasy scenes, cute cartoon dinosaurs and photorealistic forest landscapes, all of which the program handled well. It reminded me a lot of the quality of other art-centric AI programs like Midjourney -- finely detailed and creative. It had decent prompt adherence, which means it produced the images I asked for. This is one of my favorite Stability AI images. My prompt was inspired by the song Doomsday by Lizzy McAlpine. Created by Katelyn Chedraoui using Stability AI. Like a lot of AI companies, Stability struggles with coherent text generation. Even telling Stable Assistant exactly what words I wanted to appear on the image couldn't get them to always populate correctly. DreamStudio was better, but the text was still childlike and didn't match the images' aesthetic. Stability also produced some of the most convincing AI images of products I've seen, second only to OpenAI.
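The reviewer's numbers above allow a rough back-of-the-envelope estimate of what each image costs in credits and how far each Stable Assistant tier might stretch. This is a hypothetical calculation from the figures quoted in the review (900-credit Standard tier, ~418 credits left after 75 images), not official Stability AI pricing math:

```python
# Estimate per-image credit cost from the review's reported usage.
credits_start = 900   # Standard tier monthly allowance
credits_left = 418    # reported remaining balance
images_made = 75      # images generated by the reviewer

credits_per_image = (credits_start - credits_left) / images_made
print(f"~{credits_per_image:.1f} credits per image")  # ~6.4

# Extrapolate how many images each tier could yield at that rate.
tiers = {"Standard": 900, "Pro": 1900, "Plus": 5500, "Premium": 12000}
for name, credits in tiers.items():
    print(f"{name}: roughly {int(credits / credits_per_image)} images/month")
```

At that rate the Standard tier works out to roughly 140 images a month, though actual credit costs vary by model and feature, so treat these figures as an illustration only.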
I asked Stability to create stock imagery for an iPhone, a pair of Ray-Ban sunglasses and a Hydroflask water bottle, and the results were surprisingly realistic. If you don't look too closely, these all look like they could be on each retailer's website. Created by Katelyn Chedraoui using Stability AI Requests for brand names, logos and celebrities' likenesses are typically shot down by AI image generators since they're protected content or sometimes go against a company's AI usage guidelines. I asked the chatbot if it was allowed to create brand names and logos. It replied: "I can create images that resemble well-known products and logos, but I cannot create exact replicas of copyrighted or trademarked materials." I was surprised not just to have my prompts with brand names go ahead, but for the results to be so good. One reason it may be able to produce these results is because of its training data and processes. Like the majority of AI companies, Stability's training datasets aren't public. Stability is currently being sued in a class action lawsuit where artists allege the company is infringing on their copyrighted work. Getty Images is also suing Stability, alleging that the company used 12 million photos from its collection without permission or payment. I strongly advise you not to create AI images that could potentially infringe on copyrighted material or replicate a real person's likeness. How engaging are the images? The images were engaging and often colorfully vivid. Using the upscaling tool was helpful for refining small details and making images more engaging. Images made with Stable Assistant and DreamStudio aren't watermarked, so make sure you disclose their AI origins when you share them. Can you fine-tune results? The best part of using Stability is its many editing tools. Its chatbot Stable Assistant has the most editing controls of any AI creative program I've tested, which is saying something. 
All the usual suspects were present in Stable Assistant and DreamStudio, including the ability to add, remove and replace objects and the image's background. You also have two ways to upscale to higher resolutions, which is great. But where Stable Assistant goes above and beyond is with its additional editing toolkit, which lets you recolor specific objects and create similar variations based on your image's structure or style. Plus you can apply a new style. I used the search and recolor tool to create different variations of iris and eyeliner color from the same base image (left). Created by Katelyn Chedraoui using Stability AI You can also just send follow-up editing requests in a regular message, like with OpenAI's conversational image generators. You can also use your AI image as a base for a new AI video or 3D model, a nice perk that's icing on the cake. Speaking of icing, it's worth noting that Stable Assistant's chat-to-edit function was hit-or-miss. This doesn't matter as much with other tools available to help tweak your images, but this example of a vanilla-and-chocolate cake illustrates how it can mess up. Stability and I have different definitions of what constitutes icing. Screenshot by Katelyn Chedraoui I always encourage people to use style references when they have the chance, and Stability's was decent. You can see how Stable Assistant maintained the color scheme and general vibe of my original photo (left) when I asked for a new image of a couple on a lake (right). Created by Katelyn Chedraoui using Stability AI But if you're looking to AI-ify an image or use AI to change the style of an existing image, you're out of luck. All I wanted was a cartoon version of this guacamole snap I took. Instead, Stability gave me a new version of my previous prompt asking for a forest. Why it made the deer out of tortilla chips, I don't know. 
Created by Katelyn Chedraoui using Stability AI With so many editing tools, I was initially worried about a quantity-over-quality issue. I got every tool to work at some point, but there were times when the features lacked the specificity and fine-detailed scale I would expect from a more professional program. Like with any AI service, the best way to take advantage of the many editing tools it offers is to spend some time with all of them. It's a learning curve, figuring out what tools will work best in what scenario. For me, playing around with Stability's editing tools was the best part of my reviewing process. How fast do images arrive? Stability was relatively quick, popping out images in 30 to 60 seconds. Stable Assistant only generates one image per prompt, which definitely helps speed things up. DreamStudio lets you generate up to four images at a time. I prefer when AI image generators give me multiple variations, so DreamStudio was great for that. Dramatic ballerinas are one of my favorite tests for AI image generators, and Stability succeeded. Created by Katelyn Chedraoui using Stability AI I'm impressed with Stable Diffusion. But I still have concerns Overall, I was impressed with the creativity, detail and speed of the AI images Stability produced. Stability's raw AI images weren't immune to the hallucinations and errors that plague AI images. There are definitely things I wouldn't use Stability for, like text-heavy imagery. But the sign of a great AI image generator is whether the program offers you tools to fix those mistakes. This is where Stability shines, especially in Stable Assistant, and its editing suite clearly outpaces the competition. But I'm not without concerns. First, it was ridiculously confusing to figure out the best way to use the Stable Diffusion models, whether through Stable Assistant, DreamStudio or third-party platforms. 
A lot of the user interface settings I wanted in Stable Assistant were available in DreamStudio (like a main library and the ability to select which AI model you wanted to use). But DreamStudio doesn't have all of the editing tools that I enjoyed and used in Stable Assistant. I'm also concerned that the most recent Stable Diffusion model underlying both programs, Stable Image Ultra, is a little too good at recognizing and replicating brand-name characters, logos and products. In the future, I would love to see Stability AI more clearly address the differences between Stable Assistant and DreamStudio. I also think future model updates could learn something from OpenAI about legible text generation in AI images. These simple changes would take the frustration out of using what is ultimately a capable AI creative system.

Tengr.ai: A privacy-by-design generative AI platform

International Business Times

28-06-2025

  • Business
  • International Business Times


Generative AI (GenAI) is rapidly reshaping industries from media to medicine, though concerns around privacy, transparency, data integrity and ethics are on the rise. A recent report from Deloitte indicated heightened skepticism, with over 78% of users finding it challenging to control the data collected about them. AI image generators like Midjourney, DALL-E or Stable Diffusion raise serious privacy concerns, from using personal photos without consent in training data to unintentionally recreating real faces. They've also been used to create fake identities in online scams. As the technology evolves, experts warn that safeguards, especially for vulnerable groups like children, are lagging behind. Hungary-based company Tengr.ai aims to tackle that with its privacy-by-design creative GenAI platform, which is used by over 500,000 users worldwide. What is Tengr.ai? Tengr.ai is an ethical image-generation platform that lets users create without censorship issues or data harvesting, designed for creators, businesses, educators and more, with a strong emphasis on user privacy and creative freedom. The company employs its proprietary Hyperalign™ technology to balance uncensored creative expression with safety. This allows the generation of diverse content while preventing misuse, such as deepfakes or harmful imagery. "Users retain full ownership of the images they create, enabling them to use their creations for commercial purposes without restrictions," says Péter W. Szabó, CEO and co-founder of Tengr.ai. How Tengr.ai works: Unlike competitors that harvest personal data or impose restrictive licenses, Tengr.ai is designed with privacy at its core. It does not collect or store any personal information, and users maintain full commercial rights to all images they create. Its Hyperalign™ technology quietly converts risky prompts into safe, compliant results, avoiding the constant battle of traditional filters while maintaining seamless creative freedom.
Tengr.ai also recently announced Quantum 3.0, an upgraded image-generation engine that sets a benchmark for prompt fidelity, rendering speed and photorealism, all while retaining the existing infrastructure. "Quantum 3.0 Engine uses advanced diffusion-transformer technology to accurately interpret complex prompts, reducing image revisions by 38% and enhancing fine details like hair and typography," says Szabó. The Detailer Upscaler 3.0 claims to boost images up to 8x resolution with lifelike textures, offering "Details Only" and combined upscale modes for crisp prints. Its One-Click Background Swap, powered by ScenaNova, claims to isolate subjects and create custom backdrops. Why privacy and personal data matter: "AI image generators are raising serious privacy concerns," says Szabó. From models unintentionally recreating real people's faces to fake profiles used in scams, these tools can misuse personal data in harmful ways. Lawsuits like Getty Images vs. Stability AI highlight the unauthorised use of private photos in training data. Protecting personal data isn't just about compliance; it's about respecting individual rights, preventing real-world harm and being ethical in an increasingly digital world. Tengr.ai's introduction into Web3: Brands like Jack Wolfskin and Tesa SE are already using Tengr.ai for product visualisation, while the company's architectural partner Zindak AI uses the platform to turn sketches and CAD renders into photorealistic imagery. Tengr.ai is also introducing its native $TENGR utility token into its platform to enhance user engagement and expand its ecosystem. Earlier this year, Tengr.ai completed an equity funding round aimed at developing and launching its $TENGR utility token, integrating blockchain tech into its platform.
Through Web3 initiatives and a utility token, the platform aims to empower and monetise its community in a more collaborative way, while ensuring that no personal data is collected or stored and that users retain full commercial rights to every image they generate.
