Latest news with #3.5
Yahoo
3 days ago
- Business
- Yahoo
MongoDB Soars 14.2% After Crushing Q1 -- $1B Buyback, AI Push, and Customer Surge Spark Rally
MongoDB (NASDAQ:MDB) is off to a fast start in fiscal 2026, and investors might want to take a closer look. The company reported $549 million in Q1 revenue, up 22% from last year, with its cloud product, Atlas, growing 26% and now making up 72% of total sales. Management added 2,600 new customers, marking the biggest quarterly gain in six years. The stock was up 14.2% as of 12:09 p.m. today. CEO Dev Ittycheria pointed to strong traction from both enterprises and startups as AI workloads and modern app development continue to drive demand for flexible, cloud-native databases. Behind the scenes, MongoDB is becoming a cash machine. The company more than doubled non-GAAP operating income to $87.4 million and posted $105.9 million in free cash flow, up 74% year-over-year. With $2.5 billion in cash and short-term investments on hand, it just authorized another $800 million in share repurchases, taking its total buyback program to $1 billion. That kind of financial firepower could give MongoDB more room to support long-term growth while returning capital to shareholders. On the AI front, MongoDB isn't just playing defense. It rolled out new retrieval models, Voyage 3.5 and Voyage 3.5 Lite, that improve accuracy and efficiency for building AI-powered apps. It also debuted its Model Context Protocol Server, which connects MongoDB to tools like GitHub Copilot and Anthropic's Claude, letting developers use natural language to interact with their data. With FY2026 revenue guidance raised to as much as $2.29 billion and full-year non-GAAP EPS projected to hit as high as $3.12, MongoDB could be shaping up as a quiet leader in the AI infrastructure race. This article first appeared on GuruFocus.
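For readers curious what the Model Context Protocol (MCP) integration mentioned above looks like in practice, an MCP server is typically wired into a client (such as Claude Desktop or an IDE assistant) through a small JSON configuration. The sketch below is illustrative only: the `mongodb-mcp-server` package name, the `--connectionString` flag, and the placeholder credentials are assumptions based on common MCP setups, not details confirmed by the article.

```json
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server",
        "--connectionString",
        "mongodb+srv://<user>:<password>@<cluster-host>/"
      ]
    }
  }
}
```

Once a client loads a configuration along these lines, the assistant can translate natural-language requests (for example, "show me the five most recent orders") into the database operations the server exposes.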
Yahoo
3 days ago
- Business
- Yahoo
Nectar Social buzzes out of stealth with $10.6 million backing from GV and True Ventures
Going viral isn't luck. 'A book that I'll write one day is going to be called 'Manufacturing Virality,'' said Misbah Uraizee. 'It absolutely is manufacturable, and there's a playbook.' That playbook includes selective engagement, showing up in search, and partnerships with micro-influencers—not celebrities. It means seeding content, hosting real-life events, and using tools like polls and DMs to spark interaction. The goal: Build a community that talks, tags, and buys. Because even if virality can be engineered, it still has to come from somewhere real. 'For all of our customers, organic communities are the hardest,' said Uraizee. 'You've got paid metrics, you can buy more reach, you're able to scale at a certain length. But your organic traction—your organic community and how you show up there—is really the most authentic representation of your brand, and how viral or how much traction it has.' Misbah and her sister Farah Uraizee, both former Meta leaders—Misbah in product, Farah in engineering—left in 2023 to cofound Nectar Social, an agentic AI social commerce startup. Translation: The startup is looking to channel the social activity jungle into quantifiable revenue for brands. 'We saw how much private messaging was blowing up, and how terrible the internal elements were for brands to process and scale to that,' said Farah. 'Then, right at the cusp of [OpenAI's GPT] 3.5 launching, we were like: This is the moment. This is where you can start solving these problems in an authentic way. And that made us immediately jump on it.' Now, about two years after the Uraizees walked away from Meta, Nectar Social is launching out of stealth. The company has raised $10.6 million in combined pre-seed and seed funding, led by GV and True Ventures. BAM Ventures, Charge Ventures, FAB Ventures, Flying Fish Ventures, Mercury Fund, Trust Fund by Sophia Amoruso, and XRC Ventures participated in the round. 
Nectar's customers include Bobbi Brown's Jones Road Beauty, soda brand Olipop, viral California makeup brand Tower28, and skincare device brand Solawave. (On Instagram alone, this group has a combined following of more than 1.8 million.) GV's bet is that Nectar will become essential to brands looking to compete in the crowded e-commerce space, post-DTC boom. 'Brands are inundated by DMs and comments,' said Frédérique Dame, general partner at GV. 'And they need to convert the customer right away on the spot. When the customer asks a question about the product, they're ready to buy. And Nectar is shrinking the actual funnel to instant purchase.' Nectar's tools include AI 'social copilot' agents that autonomously handle community interactions, real-time data, and revenue attribution metrics that tie DMs, mentions, and comments to purchases. (Nectar said that it's already managed more than 500,000 DMs, comments, and mentions, while reducing customers' average response times to about five minutes for inbound social media queries.) And yes, there's absolutely a Gen Z angle here—Salesforce data suggests that 76% of Gen Z is finding products through social media. 'Social is becoming the go-to place for people to discover a brand, build a record with a brand, interact and close the loop on the brand,' said GV's Dame. 'But the magic of this is that it goes from inside out. They can extend it to your email ecosystem.' Misbah and Farah are five years apart, and though their early years took them from Singapore to Sweden to Ireland, much of their childhood was spent in Northern California, where their parents worked in the telecom industry. Nadeem and Amtul Uraizee, their parents, told Fortune via email that they weren't surprised the sisters decided to start a company together. One thing they emphasized: Misbah and Farah have always had a habit of disagreeing—thoughtfully. As kids, they would debate vigorously on family road trips.
'But the moment one of them ever hit any rough patches, the other would drop everything to help,' Nadeem and Amtul wrote to Fortune. 'That's just who they've always been: each other's biggest critics and biggest champions. Now, seeing them run a company together, it honestly feels like the same relationship playing out on a bigger stage. They still disagree (probably more than ever), but they've learned how to turn those disagreements into something meaningful that they can build together.' And what they're trying to build together is a ubiquitous but invisible engine for, essentially, going viral. Conceptually, going viral is interesting for its odd combination of science and mystery. There are, as Misbah points out, ways to build virality. But when something goes viral, it's ultimately because people genuinely care in some way. And that isn't a one-way street—a brand on social media has to consistently show (at scale) that it cares about its customers. 'The best brands are hyperactive,' said Farah. 'They respond to every single person, and they don't only respond—they proactively encourage it.' See you tomorrow, Allie Garfinkle. X: @agarfinks. Nina Ajemian curated the deals section of today's newsletter.


Time Magazine
05-05-2025
- Entertainment
- Time Magazine
Why This Artist Isn't Afraid of AI's Role in the Future of Art
As AI enters the workforce and seeps into all facets of our lives at unprecedented speed, we're told by leaders across industries that if you're not using it, you're falling behind. Yet when AI's use in art enters the conversation, some retreat in discomfort, shunning it as an affront to the very essence of art. This ongoing debate continues to create disruptions among artists. AI is fundamentally changing the creative process, and its purpose, significance, and influence depend on one's own values—making its trajectory hard to predict, and even harder to confront. Miami-based Panamanian photographer Dahlia Dreszer stands out as an optimist and believer in AI's powers. She likens AI's use in art to the act of painting or drawing—simply another medium that can unlock creative potential and an artistic vision that may have never been realized without it. Using generative AI models like Stable Diffusion 3.5, Midjourney, Adobe Firefly, and Nova, Dreszer trained an AI image generator on her style for over a year, instructing it to produce artwork with her sensibilities, with one piece in her current exhibition produced entirely by AI. Titled 'Bringing the Outside In,' the show is what Dreszer calls a 'living organism.' (It is on display until May 17, 2025 at Green Space Miami.) Her vivid, maximalist still lifes depict layered familial heirlooms, Judaica, flowers, and textiles made by Panamanian indigenous women. Attendees can interact with an AI image generator in the exhibition to produce their own artworks in Dreszer's style, telling the machine in a sentence or two what they want it to produce; in seconds, an artwork is created. Also as part of the show, Dreszer programmed an AI-generated clone of herself, which looks and speaks like her, to guide visitors via video chat through the space. This interview has been lightly edited for length and clarity. TIME: Take me back to the first moment you realized AI could enhance your art. What about AI drew you in?
What did you feel? Dreszer: I believe technology is here to supercharge us. When generative AI entered the mainstream, I knew I wanted to get my hands dirty right away. I was already in the world of NFTs, but this was a different conversation. It took over a year of experimentation and dialogue with image generators to feel comfortable finally creating a piece to include in a body of work. This exhibition includes one piece I made in collaboration with AI. I personalized an AI image model on what the exhibition means, feels like, and looks like, feeding it images embodying my style. I included the Florida Everglades in the foreground, reflecting the landscape where I'm living today. I'm not only interested in AI and art, but also in adding nature to that conversation. I've hung flowers on top of this piece that fall onto the frame or the ground when they die, allowing nature to do its thing. I have not intervened physically. I believe nature, art and technology can coexist nicely. I actually thought that all the pieces in your exhibition were produced by AI. That's also the intention, right, because they are not. I'm always trying to play with the viewers, to disorient, because everything is not what you see at first glance. There's no artificial enhancements in most of these works, but just the fact that you think there are—I find that narrative interesting. What inspired you to create a clone for this exhibition? My clone is so fun. I'm trying to pose questions to the community as they engage with these works: Moving forward, what does it mean for relationships when we're speaking to a machine as if it was a human, and we cannot know the difference? What is our role as humans if we have clones that can mimic what we do? I want to see how that dialogue evolves. There's a practicality as well. The clone guides you through the show, probably better than I can. It's trained on what I know, but as a machine, it's supercharged. Why did you include your clone? 
I wanted to have an AI version of myself to guide viewers and answer questions, to educate others in order to demystify AI. Through the clone, I can humanize the technology, 'the art of the possible,' of incorporating technology into artistic workflows. Will you keep your clone after the exhibition? Will you educate it about other parts of yourself? I'm very interested in continuing the relationship with her. I'm working through ideas and ways to train her. I haven't shared it yet, but there are different personas of the clone. I'll be fine-tuning and creating different versions based on the relationship I want her to have with the audience she's engaging with. Some critics would call the use of AI in art 'cheating.' What do you say to those critics? I'd love to have a conversation to understand how that opinion was formed. I'd encourage them to see it as a collaboration. Many people don't understand the process and the time it takes. I would invite critics to dive deeper, and think about it not just as: 'I put in a prompt, it makes art, then I'm done.' It's a long process. And this relationship between technology and the arts is not new. We've had disruptions in art through technology before. This is just more aggressive, intrusive, and rapid in its speed and pace of innovation. What specific challenges have you faced so far using AI in your art? Oftentimes the outputs are not what I wanted. As an artist, I have high expectations. I like to control the visualization so it's highly stylized, curated, and composed. With AI, that control goes away, because AI has its own intelligence and creativity, no matter how good the prompt is. It's a hard and frustrating yet also enlightening process; it may not create what you wanted, but it can make something you didn't know you wanted. Then there are technical things it doesn't know how to do, but eventually will. It's not great with certain renders or visualizations.
What scares and excites you about where AI is headed for the next generation of artists? I'm mostly excited because of the rapid pace. Updates to generative AI software happen in a matter of weeks. There's also a healthy competition in the market, which means that as users, our needs are being satisfied quicker than ever. Our feedback is being incorporated and the tools are changing. You asked about fears. AI is entering our workflows and industries in one way or another. Will we accept it? Deny it? Who will fall behind, and who will be at the forefront? I'm more excited than fearful, but I see why others may be fearful. It disrupts our workflows, and if we're not ready to change or learn new skills, it can be scary. Will collaboration with AI replace collaboration between artists? No, no, no. There are many examples of how me and several artists have collaborated with AI. One artist came to me with her artistic vision and her words, and I used my prompt engineering skills and knowledge of AI systems, and together, we created an AI piece that was her vision come to life—this beautiful red textile tree that had a huge trunk. As an artist, there is a journey one goes through when creating. When you use AI, does it still allow you to access this other-worldly experience of the creative process? There's definitely parts of the creative process that AI is not inclusive of. So for example, when I'm making AI art, I'm not painting, or getting my hands dirty. There's physicalities that are not included in that journey. But I think that's similar to any medium. So let's say I'm choosing to use my camera as my tool and not a paint brush. There's also experiences that are missed out through my photographic artistic process, that if I were using a paintbrush or another tool, would be a different journey. So that's why I see generative AI art as its own medium, and each medium comes with its own journeys and processes that are exclusive to that medium, right? 
Do you see the term 'post-human' as an accurate way to reflect this era we are entering in art? I would divert a bit from 'post-human.' I see AI more as a booster, not a replacer, but an accelerator and an enabler. So, if 'post-human' means it's a replacement, then I would lean in more to the perspective of AI as a turbo supercharger that us humans can carry with us to bolt forward. I think it could replace mundane tasks that we may not want to do. And that's where the beauty of the collaboration comes in, where we give it these tasks so our human brains reach our fullest potential, because then the low value tasks we can outsource into generative AI. How do you think historians will look back on this particular era of rapid expansion with AI? We are in the foundation era. Everyone knows what ChatGPT is. We've passed the point of inflection, and now we're at a point where industries, individuals, businesses, and creatives are finding their place in AI. How are we adapting–or not–to it? Time is of the essence. What we decide to do now, literally today, versus in a week or two, or three, or in a month, will define the next five to 10 years.


Forbes
04-04-2025
- Business
- Forbes
From Optional to Essential: Three AI Leadership Must-Haves
Gemini-generated image
Last fall, I began exploring and writing more about our core responsibilities as learning and talent leaders interacting with AI: building AI literacy, fostering experimentation, and leading through change. After covering the first two, I assumed writing about change leadership would be straightforward. It wasn't. This isn't like any technology transition we've led through before. AI isn't just another technology leaders must adopt; it fundamentally reshapes our relationship with work itself. Culture, managers, and teams have always mattered, but how we must now leverage them has dramatically changed. Successful AI integration demands explicit shifts: organizational culture must deliberately embed practices that build ongoing trust; managers must transition from expert answer-providers to skilled questioners who model curiosity and judgment; and teams must evolve from isolated task performers to adaptive collaborators who fluidly engage internally and across functions. These aren't optional refinements—they're essential shifts for organizations to fully realize AI's potential. My own journey illustrates this unprecedented challenge. Just as I developed a workflow with Claude 3.5, the 3.7 Sonnet version emerged with capabilities that fundamentally changed how I leveraged it. ChatGPT 4.0's release forced me to rethink my customized prompts, while Deep Research transformed the research part of my writing process in ways I hadn't imagined possible. Each evolution didn't just add features—it shifted my entire collaboration system with AI. I joined the "Women Defining AI" community, learned to build a chatbot based on my book ReCulturing, pivoted toward an app through Vercel, and then a colleague passionately advocated for Lovable—each tool compelling yet disruptive enough to reset my approach.
Even my writing process shifted dramatically after enrolling in Every's "Writing with AI" course, demoting AI from editor-in-chief back to my trusted assistant. Now, with agentic AI leading the conversation, I'm experimenting again with AI-assisted hotel reservations for our trip across the Middle East. This isn't about keeping up with new AI features—it's about fundamentally reshaping how we lead and work. Unlike previous technology transitions, which followed predictable roadmaps and stable best practices, AI integration demands continuous adaptation. We aren't simply teaching teams to use new tools; we're guiding them to form working relationships with technologies that think, learn, and evolve alongside them. This introduces unprecedented psychological and practical challenges for change leadership. Past technology shifts—from desktops and mobile devices to cloud computing—allowed gradual implementation over stable timelines. AI disrupts this rhythm dramatically, changing weekly or even daily. It isn't a static system we learn once and upgrade occasionally; AI continually surprises us, adapts, and pushes us to rethink our roles and interactions. For leaders, this creates an essential paradox: How can we effectively lead others through changes we ourselves are still learning to understand? Traditional change models, such as the familiar "unfreeze-change-refreeze" approach highlighted by Michael Mankins and Patrick Litre in their recent HBR article, 'Transformations that Work,' simply don't apply. Bain's research confirms the urgency of shifting toward continuous transformation: while over one-third of large organizations constantly launch change initiatives, only 12% achieve lasting success. Organizations that intentionally embed trust-building practices into their culture significantly improve their odds of sustainable AI integration. The stakes for leaders have never been higher. 
Unlike a typical CRM rollout, mismanaged AI adoption can quickly erode trust, amplify biases, or permanently disrupt team dynamics. Effective change leadership requires promoting active experimentation while establishing clear ethical guardrails—a balancing act unique to AI's complexity. Ultimately, successful AI adoption depends far less on mastering the latest tools or algorithms and far more on intentionally strengthening the foundational human elements essential to change: culture, managers, and teams. The fundamental challenge of AI adoption isn't technical—it's emotional. Employees aren't just learning new tools; they're grappling with existential questions: Will AI make my expertise irrelevant? Can I trust its outputs? Who's accountable when AI makes mistakes? These concerns aren't theoretical. McKinsey found that although 78% of organizations adopt AI, employee resistance remains a primary barrier. Similarly, multiple research reports over the last ten years show that 70% of digital transformation failures result not from technology but from organizations' inability to shift habits and behaviors. Without trust, AI strategies merely automate inefficiencies and amplify existing fears. At Udemy, we experienced this firsthand. Initially focused primarily on AI training and experimentation, we quickly discovered that real success required rebuilding the psychological contract between leaders and employees. Instead of pretending AI would seamlessly enhance everyone's work, we openly acknowledged the messy reality: AI would disrupt familiar practices, create new anxieties, and force us to redefine successful collaboration. As I detail in ReCulturing, sustainable organizational change—especially involving AI—requires aligning strategy explicitly with behaviors and practices. 
For instance, our value of "Courageously Experimental" translated directly into trust-building behaviors. The result wasn't just increased AI adoption—it was a deeper level of organizational trust. IBM's research supports our experience: Their 2024 CEO Survey found that 64% of leaders now recognize that AI success "depends more on people's adoption than the technology itself." Trust isn't built through grand AI initiatives but through daily practices that demonstrate we value human judgment as much as artificial intelligence. This foundation becomes essential because AI isn't a one-time change—it's a continuous evolution that requires a sustained commitment to experimentation, transparency, and, above all, trust. While culture creates the context for trust, managers are the ones who make it real. They must navigate paradoxes that didn't exist with previous technologies: How do we maintain leadership authority when AI tools might know more than we do? How do we drive automation while ensuring our teams continue to develop critical skills? When should we trust AI's recommendations, and when should we lean more heavily on human judgment? McKinsey's research also highlights this tension: "The biggest barrier to scaling AI is not employees—but leaders who aren't steering fast enough." Managers need to actively model curiosity and questioning to encourage employees to ask better questions about themselves, each other, and the AI itself. Active questioning isn't new, but organizations vary greatly in how deeply they embed it as a managerial practice and develop it as a core skill. I still work with many managers who think their value comes from being the expert who can answer questions rather than from knowing when and how to ask the right question. That questioning skill matters just as much for their teams as they rethink how those teams collaborate with AI.
Throughout my twenty-five-year career developing leaders, I've consistently found that strong managers are the key to successful change initiatives. At Udemy, this proved especially true with AI adoption. When we initially rolled out AI tools, the teams that successfully integrated AI weren't those with the most technical expertise; they were the ones whose managers openly shared their AI learning curves, including mistakes and uncertainties, protected their teams' time for experimentation while maintaining clear performance standards, and deliberately balanced AI and human contributions. Additionally, these managers engaged actively in team discussions around redefining roles and emphasized the importance of human skills such as strategic thinking, relationship building, and complex decision-making. Conversely, AI adoption stalled when managers avoided the technology or pushed it too aggressively without addressing team concerns. While the manager's role has gradually evolved for years—from being the primary source of answers to becoming a skilled curator of insightful questions—AI has dramatically accelerated this shift. The evolution from expert answer-provider to strategic questioner is no longer optional; it's now essential. Managers who continue to rely solely on their own expertise will increasingly struggle, whereas those who consistently ask critical questions—Which AI applications genuinely advance our goals? How can we preserve and enhance our unique human strengths? What new capabilities must we build next?—will position their teams and organizations for successful AI integration. The fundamental challenge teams face with AI isn't just learning new tools—it's the unprecedented speed of change. Traditionally, teams adapt through structured learning and development sessions and gradual implementation. However, AI's rapid evolution disrupts this familiar approach.
Processes documented today might become obsolete tomorrow, and best practices often become outdated before teams can fully adopt them. While organizations have discussed upskilling extensively over recent years, new MIT research highlights that traditional training methods are still insufficient for AI adoption: 55% of organizations report workforce skills becoming outdated within months, not years. Collaborative and social learning have long been cornerstones of effective development. Over the past two decades, formal training combined with "on-the-job" experiences has evolved into blended learning strategies. Yet, in the AI era, cohort-based collaborative learning isn't merely beneficial—it's essential. Traditional methods, such as slide decks and workshops, simply can't match AI's relentless pace, often becoming outdated before implementation. To address this challenge, our team created learning networks where members could explore, share discoveries, challenge assumptions, and build on each other's experiments in real time. What began as informal conversations quickly evolved into a dynamic community capable of adapting as rapidly as AI itself. Similarly, external communities such as Women Defining AI provided fresh perspectives and inspiration, demonstrating firsthand that knowing what and how to ask is as critical as knowing the answers. Yet effective collaboration alone isn't enough. Teams must also cultivate what computer scientist Jürgen Schmidhuber first termed adaptive confidence—the ability to continuously learn, adjust, and innovate within uncertain environments. Introduced in Schmidhuber's 1991 technical report, Adaptive Confidence and Adaptive Curiosity, the concept originally described how adaptive systems reliably navigate uncertainty. Today, adaptive confidence extends beyond skill-building; it requires fundamentally resetting team practices.
Rather than merely mastering new tools or skills, teams must embrace ongoing experimentation, regularly challenge assumptions, actively engage in iterative learning, and collectively apply human judgment to understand and leverage AI's evolving capabilities. Embedding these strategic shifts into our culture, managerial approach, and team dynamics positions us to actively adopt, shape, and successfully integrate AI. There are actions you can take today in each of these three areas. Effective change leadership in the AI era requires intentional trust-building, managers who actively model strategic questioning, and teams capable of dynamic collaboration. How are you preparing your culture, managers, and teams for these new realities?