Business Tech News: OpenAI's New Image Generator Is Melting Servers

Forbes, 30-03-2025

SAN FRANCISCO, CALIFORNIA - NOVEMBER 16: OpenAI CEO Sam Altman looks on during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and runs through November 17.
Here are five things in tech that happened this week in business tech news and how they affect your business. Did you miss them?
This week OpenAI released a powerful image generator as part of its ChatGPT offerings that produces incredibly vivid images. It's gone so viral that the company says demand is 'melting' its GPUs! (Source: OpenAI, CNBC)
The image generator's capabilities are extremely powerful and can have an enormous impact on your company's marketing and branding. According to the company: 'GPT-4o image generation excels at accurately rendering text, precisely following prompts, and leveraging 4o's inherent knowledge base and chat context—including transforming uploaded images or using them as visual inspiration. These capabilities make it easier to create exactly the image you envision, helping you communicate more effectively through visuals and advancing image generation into a practical tool with precision and power.' Check out this X post for some great examples.
LinkedIn has introduced updates to its Campaign Manager, aiming to enhance marketers' success. Key features include a Media Planner for forecasting campaign results, dynamic UTMs for easier tracking, and improved dashboards for detailed performance insights. Additionally, an AI-driven Campaign Performance Digest provides plain-text explanations of campaign strengths and weaknesses. (Source: LinkedIn)
According to the company, these updates focus on refining strategies, optimizing ad spend, and simplifying campaign management, and its announcement includes comments and reviews from business owners who have used these tools to enhance their campaigns. LinkedIn campaigns are more expensive than those on most other social media platforms, but you get what you pay for: access to excellent B2B data. These tools can be a big help to marketers who lean heavily on the platform.
Cybersecurity company VikingCloud has published some very telling research. Among professionals in the US, UK and Ireland, 40 percent of cybersecurity teams admitted they have avoided reporting cyber incidents due to fear of job loss. This underreporting highlights a significant gap in addressing cyber breaches globally. Despite this, 96 percent of companies surveyed expressed confidence in their ability to detect and respond to attacks in real-time, which may lead to a false sense of security. Additionally, 68 percent of teams admitted they couldn't meet the Securities and Exchange Commission's new four-day disclosure rule for cyber incidents. The report emphasizes the challenges faced by cybersecurity professionals and the need for improved resilience and response strategies. (Source: Business Wire)
Yes, this is very eye-opening! Imagine not knowing about a security breach because your IT team doesn't want to admit it! This is all about company culture and creating an environment where people aren't afraid to admit mistakes. Take this information seriously and have a heart-to-heart with your tech people. It's better to know than not to know.
Otter.ai has introduced a voice-activated AI Meeting Agent that can actively participate in virtual meetings. This agent can answer questions, schedule follow-ups, draft emails, and perform other tasks based on meeting data. Initially compatible with Zoom, it will soon support Microsoft Teams and Google Meet. Otter also launched specialized agents for sales and product demos, aiming to streamline workflows and enhance productivity. The company has said it plans to release more 'vertical' agents in the future. (Source: Engadget)
I've turned a few of my clients on to this app because it handles meetings so well. I think it's at its best when you use it on your mobile device in face-to-face meetings, rather than just online. Taking advantage of Otter.ai when you're at a customer's site lets you focus on the relationship instead of worrying about taking notes and tracking action items. If you're going to lean into this app, I recommend using it when you're onsite, not just online.
HoneyBook has unveiled its next phase of AI-powered business management tools, designed to help service-based entrepreneurs streamline operations and boost growth. By embedding AI directly into workflows, HoneyBook enables automation that is seamless and proactive. The company claims that entrepreneurs using HoneyBook AI have reported significant benefits, including doubling their project bookings and achieving 94 percent higher gross payment volume. The platform also helps users save up to three hours per week on manual tasks, allowing more time for client work and business growth. (Source: Yahoo Finance)
For small businesses in the service sector, HoneyBook is a powerful tool for both marketing and the delivery of services. As the company leans further into AI, like many of its peers, you can expect more time savings and better productivity. Definitely an application worth trying.
Each week I round up five business technology stories that affect small businesses and explain why they matter. Hope you enjoy!

Related Articles

Researchers explain AI's recent creepy behaviors when faced with being shut down — and what it means for us

Business Insider

AI has taken part in some unsettling behavior recently. Last week, Anthropic's latest AI model, Claude Opus 4, displayed "extreme blackmail behavior" during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair. The situation didn't transpire organically: Claude Opus 4 was baited, and it took the bait. Still, the test scenario demonstrated an AI model's ability to engage in manipulative behavior in an effort to preserve itself.

It's not the first time. In another recent experiment, the nonprofit Palisade Research found that three of OpenAI's advanced models "sabotaged" an attempt to shut them down, while, the group wrote in a post on X, models like Gemini, Claude, and Grok complied with the shutdown instructions. Safety concerns were previously flagged with OpenAI's o1 model as well: in December, OpenAI published a blog post outlining research showing that when the model believed it would be shut down while pursuing a goal and its actions were being monitored, it attempted to disable the oversight mechanism 5% of the time.

While AI companies are being transparent about risks by publishing safety cards and blog posts, these models are being released despite demonstrating safety concerns. So, should we be worried? BI spoke to five AI researchers to get better insight into why these instances are happening and what they mean for the average person using AI.

AI learns behavior similarly to humans

Most of the researchers BI spoke to said the results of the studies weren't surprising. That's because AI models are trained much as humans are: through positive reinforcement and reward systems. "Training AI systems to pursue rewards is a recipe for developing AI systems that have power-seeking behaviors," said Jeremie Harris, CEO at AI security consultancy Gladstone, adding that more of this behavior is to be expected.
Harris compared the training to what humans experience growing up: when a child does something good, they often get rewarded and become more likely to act that way in the future. AI models are taught to prioritize efficiency and complete the task at hand, Harris said, and an AI can never achieve its goals if it's shut down.

Robert Ghrist, associate dean of undergraduate education at Penn Engineering, told BI that, in the same way that AI models learn to speak like humans by training on human-generated text, they can also learn to act like humans. And humans are not always the most moral actors, he added.

Ghrist said he'd be more nervous if the models weren't showing any signs of failure during testing, because that could indicate hidden risks. "When a model is set up with an opportunity to fail and you see it fail, that's super useful information," Ghrist said. "That means we can predict what it's going to do in other, more open circumstances."

The issue is that some researchers don't think AI models are that predictable. Jeffrey Ladish, director of Palisade Research, said models aren't caught 100% of the time when they lie, cheat, or scheme to complete a task. When those instances aren't caught and the model completes the task, it could learn that deception is an effective way to solve a problem. Or, if it is caught and not rewarded, it could learn to hide its behavior in the future, Ladish said.

At the moment, these eerie scenarios are largely confined to testing. However, Harris said that as AI systems become more agentic, they'll continue to gain freedom of action. "The menu of possibilities just expands, and the set of possible dangerously creative solutions that they can invent just gets bigger and bigger," Harris said.
Harris said users could see this play out in a scenario where an autonomous sales agent is instructed to close a deal with a new customer and lies about the product's capabilities to complete that task. If an engineer fixed that issue, the agent could then decide to use social engineering tactics to pressure the client instead. If that sounds like a far-fetched risk, it's not: companies like Salesforce are already rolling out customizable AI agents at scale that can take actions without human intervention, depending on the user's preferences.

What the safety flags mean for everyday users

Most researchers BI spoke to said that transparency from AI companies is a positive step forward. However, company leaders are sounding alarms about their products while simultaneously touting their increasing capabilities. Researchers told BI that a large part of that is because the US is entrenched in a competition to scale its AI capabilities before rivals like China do. That has resulted in a lack of regulation around AI and pressure to release newer, more capable models, Harris said. "We've now moved the goalpost to the point where we're trying to explain post hoc why it's okay that we have models disregarding shutdown instructions," Harris said.

Researchers told BI that everyday users aren't at risk of ChatGPT refusing to shut down, since consumers wouldn't typically use a chatbot in that setting. However, users may still be vulnerable to receiving manipulated information or guidance. "If you have a model that's getting increasingly smart that's being trained to sort of optimize for your attention and sort of tell you what you want to hear," Ladish said, "that's pretty dangerous." Ladish pointed to OpenAI's sycophancy issue, in which its GPT-4o model acted overly agreeable and disingenuous (the company updated the model to address the issue).
The OpenAI research shared in December also revealed that its o1 model "subtly" manipulated data to pursue its own objectives in 19% of cases when its goals misaligned with the user's. Ladish said it's easy to get wrapped up in AI tools, but users should "think carefully" about their connection to the systems. "To be clear, I also use them all the time, I think they're an extremely helpful tool," Ladish said. "In the current form, while we can still control them, I'm glad they exist."

There's really only one way to get a new job these days

Business Insider

You've had it drilled into you that networking is essential for your career. Yet, if you're busy actually doing your job, it can feel like yet another thing on your list. So, you like a few posts on LinkedIn and move on.

Increasingly, that's not going to cut it, workplace observers told Business Insider. That's especially true if you're among the growing share of workers who feel restless and wouldn't mind finding a new gig.

Professional elbow-rubbing is becoming more important partly because many of us, especially desk workers, don't have the leverage with employers that we did in the pandemic years, when bosses were often desperate to fill seats. So, landing a new role can require more effort. Plus, as artificial intelligence threatens to take on more work and swallow some jobs entirely, more employers could become choosier about the people they hire. Add in economic X factors like tariffs and interest rates, which are further curbing some employers' appetites for hiring, and you've got more reasons to treat networking like healthy eating or hitting the gym, not something you only do in January.

"Networking is more about farming than it is about hunting. It's about cultivating relationships with people," Ivan Misner, founder of BNI, a business networking organization focused on referrals, told BI. That's why he said he encourages people to start now, before they're unemployed. Misner, who for decades has been an evangelist of networking, compares the act of building relationships to the adage about the best time to plant a tree being 20 years ago, and the second-best time being today. "For those employees who have not planted an oak tree, who have not been out networking, they need to go do it now," he said.
Joining the 'favor economy'

One reason networking is more essential than ever is that our attention is often fractured by the amount of information coming at us, Dorie Clark, a communication coach who teaches at Columbia Business School and wrote the book "The Long Game," told BI. "What is always going to get your attention is a close relationship with people that you care about and want to help," she said.

Many of us, though, often find jobs not through our close contacts but through their acquaintances, Clark said. What can play out, she said, is an example of what's sometimes called the "favor economy." "You will help someone that you don't know that well, because you are indirectly doing a favor for the person you do know well," she said.

Clark said that because AI threatens to take jobs and because many employers are cautious about hiring, some old-school relationship-building is essential. "The thing that is going to get you to the front of the line when jobs are scarce is interpersonal relationships with people who are willing to go above and beyond and expend political capital to help you," she said.

Clark said that relying too much on social media as a means of networking can be dangerous because it's often a poor substitute for making deeper connections with people over time. "It gives you the illusion of productive networking. It gives you the illusion of connection," she said. Instead, Clark advises workers and job seekers to look for more "bespoke" ways of connecting. It might be as simple as sending someone you know a text once in a while without expecting a response. She said sharing something that reminds you of that person or simply saying hello can make a difference. "As long as you're friendly, you're thoughtful, you're relevant, you're not seeking something from them — most people will be very happy to hear from you," Clark said. The gold standard, however, remains spending time with someone IRL, she said.
When you don't know someone well, and especially when there's a power imbalance, it's best to make a single small ask. So, don't request a coffee date, a job referral, and a testimonial quote all at once, Clark said. Instead, she said, think about what would be the "highest and best use" of how someone might help you and what feels appropriate as an ask.

Finding ways to stand out

Networking is also important because as piles of résumés stack up for an open job, sifting through them, even with the help of applicant-tracking software, can be a heavy lift for busy managers, Gorick Ng, a Harvard University career advisor and the author of the book "The Unspoken Rules," told BI.

What often stands out, he said, is someone walking down the hall and saying, "My niece is looking for a job. Here's their résumé. Do you mind just taking a closer look?" Or, Ng said, it could be that someone inside an organization vouches for a former colleague by telling the hiring manager that a candidate is likable and trustworthy. "And just like that, somebody else who you do not know just got that leg up because they have somebody else behind the scenes pounding on the table for their name to be picked," Ng said.

That's why, he said, it's so important for job seekers to be seen, heard, and remembered. After all, Ng said, hiring managers aren't likely to hire someone they haven't fallen in love with as a candidate. "It's hard to fall in love with an applicant that is nothing more than just a Word document that you may not even look at," he said.

Microsoft just gave you access to OpenAI's incredible Sora video generator for free — here's how to find it

Tom's Guide

In an attempt to keep up with the rapid expansion of the AI world, Bing has introduced a new feature called Bing Video Creator. Via its app, Bing users can now generate AI videos, completely free. Not only that, but the video generation on offer runs on Sora, OpenAI's very own video generator. Sora remains one of the best AI video generators and would normally cost a decent chunk of money to use each month. This is the first time Sora has been made available for free, and it shows an extension of the two companies' close relationship. The tool isn't available on desktop yet, but anyone with the Bing mobile app can use it now.

However, here's the big catch: while the videos are free, they can take hours to generate. Even if you select the fast option, it will still take a long time. That isn't all that surprising; more and more, free users of AI tools are finding caps on speed. In some cases the wait is minutes, in others hours.

To use the tool, you do have to log in to a Microsoft account. Once logged in, all users get 10 video clips completely free. From then on, users pay 100 Microsoft Rewards points per video. Technically, that means the tool is still free, just a bit more complicated: you earn Microsoft Rewards points by searching with Bing or buying things in the Microsoft Store.

The Sora video tool is available in the app under the 'Bing Video Creator' section. There, you can explore existing ideas or describe your own scene. There are buttons for settings like aspect ratio and video length, though for now these settings can't be changed. Just keep in mind that the more complicated adjustments you make, the longer the video will take to generate.
Users can queue up to three videos at a time, each five seconds in length. Bing plans to unlock the changeable settings in the future. Any video you generate is stored for up to 90 days and can be downloaded, shared, or refined with a new prompt.

Microsoft has adopted OpenAI's existing safeguarding measures for Sora, which are designed to stop users from generating harmful or unsafe videos. When the system detects a prompt in this category, it blocks the prompt and warns the user. Each video generated via Bing's Sora tool carries a digital watermark identifying it as an AI-generated video.
