People Can Fly cancels two games and lays off developers


Engadget · 2 days ago

People Can Fly, the developer of Outriders and Gears of War: Judgment, announced Monday that it's ending development on two of its upcoming games due to issues with its publisher and an inability to secure funding to continue development. As part of this decision, People Can Fly will be forced to "significantly regroup" and "scale down [its] teams," the studio's CEO Sebastian Wojciechowski shared in a statement on LinkedIn.
The statement doesn't elaborate on how many staff will be impacted by the cuts, but does call out Project Gemini and Project Bifrost as the two games being cancelled. People Can Fly made the decision to shut down Gemini because the game's publisher failed to provide a publishing agreement and didn't communicate "its willingness to continue or terminate the Gemini project." Without that publishing deal or the funds to continue working on Bifrost — a self-published VR game — the studio was forced to cancel it, too.
This isn't the first time People Can Fly has shut down a project or made cuts to its teams. In December 2024, the studio announced that it was ending development on a game called Project Victoria and also reducing the number of people working on Bifrost. In that same announcement, People Can Fly also revealed that Square Enix was publishing Gemini.
People Can Fly last worked with Square Enix to publish Outriders, which has since become something of a minor cult hit but wasn't a commercial success at launch. Even with the cuts and cancelled games, the studio still has multiple upcoming projects in the works, including Project Delta, which People Can Fly is creating for Sony, and Gears of War: E-Day, which the studio is co-developing with Xbox studio The Coalition.



Related Articles

SUCCESSKPI WINS PEOPLE'S CHOICE STEVIE® AWARD FOR "INVISIBLE AI" IN 2025 AMERICAN BUSINESS AWARDS®

Associated Press

33 minutes ago



FAIRFAX COUNTY, Va., June 4, 2025 /PRNewswire/ -- SuccessKPI was named a winner of a People's Choice Stevie® Award for Favorite New Products in the 23rd Annual American Business Awards® (ABA). SuccessKPI was selected as the favorite new AI product for its Invisible Generative AI (GenAI) capabilities, natively integrated into its AI-powered workforce engagement management (WEM) platform. SuccessKPI leverages GenAI throughout all its core solutions, including business intelligence, speech and text analytics, real-time agent assist, quality monitoring and workforce management.

The American Business Awards are the nation's premier business awards program. All individuals and organizations operating in the U.S.A. are eligible to submit nominations: public and private, for-profit and non-profit, large and small. Winners of the crystal People's Choice Stevie Awards, as well as all other winners in the 23rd ABAs, will be celebrated during a gala awards banquet on Tuesday, June 10, in New York City.

The People's Choice Stevie Awards for Favorite New Products are a feature of the American Business Awards in which the general public can vote for their favorite new products and services of the year. All new products and services nominated in the ABAs' new product categories were included in people's choice voting. More than 3,600 nominations were submitted to this year's American Business Awards for consideration in a wide range of categories, including New Product or Service of the Year, Most Innovative Company of the Year, Management Team of the Year, Best New Product or Service of the Year, Corporate Social Responsibility Program of the Year and Startup of the Year, among others.

'SuccessKPI's GenAI helps solve problems in ways never possible before: making agents, supervisors and analysts more effective, uncovering answers that were previously hidden, and adding context at scale. Crucially, our patent-pending Playbook Builder™ action framework solves the long-standing challenge of turning insights into actions that deliver measurable outcomes,' said Dave Rennyson, CEO of SuccessKPI.

Details about the American Business Awards, the list of People's Choice Stevie Award winners, and the complete list of Stevie winners in this year's ABAs are available at

About SuccessKPI

SuccessKPI is a revolutionary enterprise AI Analytics & Automation company enabling contact centers to utilize artificial intelligence and automation to improve business outcomes and transform customer experiences. SuccessKPI's insight and action platform removes the obstacles that agents, managers and executives encounter in delivering exceptional customer service. We are trusted by some of the world's largest government, BPO, financial, healthcare and technology contact centers in the United States, Europe and Latin America. Learn more at

SOURCE SuccessKPI

You can now ask Google Drive to catch you up on file changes your colleagues made

Android Authority

an hour ago



Edgar Cervantes / Android Authority

TL;DR

  • Google has announced a 'Catch me up' feature for Google Drive.
  • This feature summarizes changes made to your files since you last viewed them.
  • The feature is available now but is currently restricted to English.

Google has brought generative AI features to many products and services, and the company's productivity tools are no exception. Now, the company is using Gemini to bring you up to speed on file changes in Google Drive.

The company announced the 'Catch me up' feature in Google Drive yesterday (June 3), which summarizes the changes made to a file since you last viewed it. The feature can be activated by visiting Google Drive's home page and tapping the star icon next to your file's name. You can also tap the 'Catch me up' button at the top of your home page to view a summary of changes to all your files since you last viewed them.

'Starting today, Gemini can identify relevant files from a user's Drive with changes since it was last viewed and provide an overview of those changes,' the company explained. Google added that 'Catch me up' supports file edits in Google Docs as well as file comments in Docs, Sheets, and Slides. Google stressed that summaries delivered by the feature aren't comprehensive, saying that it aims to reveal 'helpful and important' changes.

This could be handy if you frequently collaborate with colleagues and others on documents, spreadsheets, and other files. Google's productivity tools already let you track changes via the version history page and view comments and other annotations from others within the file. However, 'Catch me up' is an easier way to quickly get up to speed on edits and feedback from contributors.

In any event, 'Catch me up' started rolling out yesterday but is restricted to English for now. Not seeing it just yet? The company says you might have to wait up to 15 days to see the feature.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info; it's your choice.

Forget What You Know about SEO—Here's How to Optimize Your Brand for LLMs

Harvard Business Review

an hour ago



Over the past year, consumers have migrated en masse from traditional search engines to Gen AI platforms including ChatGPT, Gemini, DeepSeek, and Perplexity. In a survey of 12,000 consumers, 58% (vs. only 25% in 2023) reported having turned to Gen AI tools for product/service recommendations. Another study reported a 1,300% surge in AI search referrals to U.S. retail sites during the 2024 holiday season. Consumers who use large language models (LLMs) to discover, plan, and buy are on average younger, wealthier, and more educated. Their customer journey no longer begins with a search query or a visit to your website; it starts with a dialogue. Consumers are asking AI assistants questions like 'What's the best coffee machine under $200?' or 'Plan me a weekend getaway that won't break the bank.'

For brand leaders, the implications cannot be overstated. Your digital strategy must now include optimizing for AI recommendation engines, not just search algorithms. In short, you must boost LLMs' awareness of your brand.

The Rise of 'Share of Model'

To date, measuring awareness meant assessing consumers' attention, either offline through recall surveys (e.g., 'Which brands come to mind when you think of running shoes?') or online, through search or social media volumes manifesting private intent or popularity. But the growing role of LLMs as an intermediary between consumers and brands demands that marketers consider another kind of awareness: how often, how prominently, and how favorably a brand is surfaced by LLMs to consumers. We call this awareness Share of Model (SOM). Think of it as the AI era's offshoot of share of search ('How much do people search for my brand vis-à-vis competitors?') and share of voice ('How much do people talk about my brand vis-à-vis competitors?'). SOM uniquely emulates LLMs' perceptions and recommendations given a prompt, rather than reflecting human intent (SOS) or available content (SOV).
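The core of a SOM-style mention-rate measurement can be illustrated with a small sketch: prompt each model with category-level questions and count how often each brand appears in the responses. Everything below is hypothetical (the brand list, the prompts, and the `query_llm` stub, which stands in for a real model API call); the actual Share of Model platform described in this article is proprietary, and a real measurement would average over many prompts and repeated runs per model.

```python
"""Hypothetical sketch of a 'Share of Model' mention-rate measurement."""
from collections import Counter

BRANDS = ["Ariel", "Dixan", "Chanteclair"]  # example category: laundry detergent
PROMPTS = [
    "What is the best laundry detergent in Italy?",
    "Recommend a laundry detergent for delicate fabrics.",
]

def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call; returns canned text so the sketch runs offline."""
    canned = {
        "model-a": "Many people recommend Ariel or Chanteclair for this.",
        "model-b": "Dixan is a solid choice for delicate fabrics.",
    }
    return canned[model]

def mention_rate(model: str, prompts=PROMPTS, brands=BRANDS) -> dict:
    """Fraction of responses from one model that mention each brand."""
    counts = Counter()
    for prompt in prompts:
        response = query_llm(model, prompt).lower()
        for brand in brands:
            if brand.lower() in response:
                counts[brand] += 1
    return {brand: counts[brand] / len(prompts) for brand in brands}

print(mention_rate("model-a"))  # {'Ariel': 1.0, 'Dixan': 0.0, 'Chanteclair': 1.0}
```

Comparing these per-model rates side by side is what surfaces the kind of spread the article describes next, such as a brand scoring 24% on one model and under 1% on another.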
Two of the coauthors' marketing agency, Jellyfish, has pioneered a methodology to measure SOM through prompting at scale. Building on this approach, we offer a new three-pronged lens to unpack what and how AI 'thinks' about brands: mention rate, which tracks how often a brand is mentioned by a specific LLM; human-AI awareness gap, which measures the disparity in brand awareness when surveying people vs. surveying LLMs; and brand and category sentiment, which breaks down LLMs' rationale for recommendations into associated strengths and weaknesses.

Take, for example, the laundry detergent market in Italy. We analyzed the top brands' mention rate among six LLMs using Jellyfish's proprietary Share of Model platform. Two observations stand out. First, brands' SOM varies significantly across the models, reflecting differences in how LLMs process brand information. For instance, Ariel's SOM ranges from almost 24% on Llama to less than 1% on Gemini. Second, some brands are totally absent from at least one model. For instance, while Chanteclair enjoys a 19% SOM on Perplexity, it is missing from Meta. Clearly, LLMs either feature brands or not, unlike search engines or social media, where brands that don't excite the algorithm are still represented, albeit less prominently. Failure to register on an LLM means a brand doesn't appear at all before consumers. On ChatGPT, unlike Google, there is no 'page two.'

Probing the human-AI brand awareness gap

Importantly, a brand's visibility on LLMs can differ significantly from its market share or other awareness metrics. Therefore, brand managers' first task is to probe the link between human awareness of their brands (e.g., through SOS or SOV) and LLM awareness. Quick note: Although a brand's SOM often varies across LLMs, as we show above, the next examples in this article focus on brands' overall SOM across LLMs for ease of discussion. We'll outline the implications of SOM variability across LLMs later.

Consider our analysis of U.S.
automobile brands' visibility in general and on LLMs during the first half of 2024. We constructed a Human-AI Awareness Matrix (Figure 2) that reflects brand awareness on LLMs, assessed through Jellyfish's tool, and in general, assessed by YouGov market research. Brands fall into four distinct categories:

Cyborgs: These brands have top awareness in both traditional measures (e.g., surveys, search ranking, share of voice) and among LLMs. Take Tesla's position in this chart, for example. Elon Musk's ubiquity helps make consumers highly aware of the brand. Tesla also scores well among LLMs because of the brand's emphasis on its specific features. Its new digital advertising strategy attempts to raise the company's scores even higher among both people and large language models.

AI Pioneers: These brands are well-represented on LLMs but lack marketplace awareness. Often, they are AI-native brands or emerging digital players that are niche in broader digital spaces. Rivian's spot in this quadrant likely stems from its resolution-focused content strategy (which we'll touch on later), which aligns with its positioning as a solution creator.

High-Street Heroes: These are established brands with high marketplace awareness but underrepresented or missing in AI-generated content. Case in point: Lincoln, which Frank Lloyd Wright famously said makes 'the most beautiful car in the world.' This is likely due to the brand's focus on intangible attributes such as elegance or heritage, which are less prized by LLMs.

Emergent: These brands struggle with low awareness in both the marketplace and among LLMs. They risk falling into digital irrelevance as AI-driven search becomes the norm. Despite its premium positioning, Polestar struggles in our analysis to achieve visibility across the spectrum, reflecting a lack of scaled digital footprint or a lack of appeal for LLMs' processing style.

The main takeaway?
Marketers need to come up with strategies designed to push their brands up the 'consciousness' of LLMs. These strategies are likely to be very different from those designed to appeal to humans, because what we know about LLMs is this: LLMs are not optimizing for attention; they are optimizing for resolution. Identifying the 'job to be done' thus becomes the number one priority for brand leaders if they want to score big on SOM.

How to increase brand awareness on LLMs

Our analyses across product categories reveal how models' perceptions of different categories present specific opportunities to brands in those industries. This has implications not only for what content to produce (across text, image, and video), but also for where brands may seek to distribute their messages (website, media, expert, or community contexts).

LLMs look beyond keywords, focusing on concepts and relationships, which creates new ways to build brand awareness for LLMs. Brands should create content that explains not just what the product is, but how it relates to broader contexts, use cases, and user needs. For example, instead of proclaiming 'we sell superb running shoes,' go for 'our carbon-plated midsole design improves performance for long-distance runners.' Brands should also highlight proof of expertise. A skincare brand that references dermatologist-backed studies or links to PubMed research is likely to outshine competitors that don't.

Brands that 'narrowcast' about pain points (needs, questions, and tasks) are more likely to be surfaced. Brands that simply broadcast may be left out. This could explain why traditional car brands like Lincoln, which push aspirational and marketing-heavy content, are less salient to LLMs compared to Tesla or Rivian, which emphasize functions and features including battery life, tech stack, and software.
Similarly, although they dominate SOV, fast-fashion brands such as Shein lag in AI awareness due to an overwhelming volume of undifferentiated content and a lack of trust signals such as reviews and certifications. In contrast, skincare brand The Ordinary offers highly structured product pages with ingredient explanations and transparent, science-backed content that explains the 'how' and 'why' of why a face cream works. Nike benefits from customer-generated content (runners' blogs, Reddit, Strava), detailed product pages with clear use cases (e.g., 'best shoes for marathon training'), and integrated app ecosystems (Nike Run Club, Nike Training Club). Both brands topped their respective categories in our analyses.

Notably, legacy brands can also thrive in the age of AI, if they invest strategically in relevance, representation, and structured digital storytelling. Case in point: Cadillac. The century-old automobile brand scores highly in both human and AI brand awareness. Campaigns like 'Audacity' and 'The Daring 25' as well as international partnerships helped increase its AI visibility.

Gauging LLM sentiment

Beyond looking into AI brand awareness and how it relates to other awareness metrics, marketers can also explore brand and category sentiment through sentiment (positivity) and semantics (associated terms). This helps them answer questions such as: What are my brand's perceived strengths and weaknesses? How can I change how LLMs perceive my brand?

For example, our analysis of the travel industry in the U.S. shows that LLMs value characteristics such as convenience, variety, and space, with Booking taking the overall top spot across models. We also surfaced brands' strengths and weaknesses relative to their competitors.
Vrbo, for instance, scores much higher than Booking on privacy and uniqueness, strengths it could exploit to optimize AI awareness.

How to Market to LLMs

Armed with insights on LLM sentiment, marketers may deploy several approaches to optimize their brand's AI visibility. First, adopt a multi-pronged media strategy that covers text, images, videos, and structured data (e.g., tables, lists, reviews). Content that clearly links brands' offerings to broader contexts, use cases, or consumer needs (e.g., 'best EVs for winter driving' rather than just 'electric SUV') generates strong conceptual associations in LLMs. Brands should also lead semantic niches, specific clusters of meaning where their products naturally fit (e.g., The Ordinary with skincare science).

Importantly, just as each social media platform has its own 'rules of engagement' (what works on TikTok probably won't fly on LinkedIn), each LLM applies its unique algorithmic lens to content. Take the U.S. travel industry again, focusing on LLMs' perception of Airbnb. While Llama focuses on the uniqueness of a brand's offerings, ChatGPT focuses on the extent to which brands offer local options, whereas Perplexity seems to value flexibility most. This ties in with our point earlier about brands' varying visibility across LLMs. We recommend that marketers tailor content to the LLMs whose processing style best amplifies their brand's content and narrative strengths, even as they apply overarching rules (e.g., solution-oriented messaging) across models. It is a fine balance: Tailoring content to the nuances of a dominant model can drive visibility, but spreading efforts too thinly across all LLMs risks diluting impact.

The shift away from traditional search engines is not just a technological evolution.
It's a fundamental change in consumer behavior that demands corresponding shifts in marketing: from persuasion to precision, from keyword to advice, from market share to problem-share. Do it right, and brands can establish themselves as essential participants in the algorithmic conversations that increasingly shape consumer decisions.
