The Price Hikes For Games On Switch 2 Are Likely From Publishers

Forbes
05-04-2025
Since the reveal of the Switch 2, many have commented on the price hike for games on the system. The reality is that this is likely coming from publishers.
While some have pointed to inflation and shown that, in adjusted terms, games on older Nintendo systems cost roughly as much as these new prices, the reality is that the cost of games relative to the first Switch has increased substantially.
However, pointing the finger at Nintendo for driving this increase is not really accurate, especially as much of a platform holder's pricing is shaped by its third-party publishers.
The success of any gaming platform ultimately stems from the support it receives from third-party publishers. Many of these publishers, especially the larger ones, consequently have a big say in pricing policy on those platforms.
As the Switch 2 is the first of the next wave of consoles, Nintendo has been left to kick off the price increase that its third-party partners likely want.
The reason publishers are doing this is simple: the last decade has resulted in catastrophic losses for most publishers worldwide. The constant layoffs have been the biggest and clearest symptom of this situation.
However, these publishers need to start making their money back quickly, so a price hike on a new platform will, in theory, give them that cash injection they so desperately need right now.
That Nintendo is taking the brunt of this head-on also reflects another publisher habit: keeping their decision-making out of public view, in this case by letting Nintendo announce the price hike. Still, I sense Nintendo's hands are tied on this one.
Unfortunately, as is often the case, gamers are being hit hard by the results of publisher management's poor decision-making.
The sad thing, apart from the price increases themselves, is that they do nothing to address the existential threat of institutional knowledge being lost to all these developer layoffs.
While the price hike may stabilize publishers' finances, the beating heart of gaming still needs to be resuscitated, simply because without a wide array of financially healthy game developers, the future of gaming as a medium and as an industry looks very bleak.
Follow me on X, Facebook and YouTube. I also manage Mecha Damashii and am featured in the Giant Robots exhibition currently touring Japan.

Related Articles

Social media is dead. Meta has admitted as much. What now?

Fast Company

13 minutes ago


Back in March, Facebook introduced a new feature that wasn't exactly new. The Friends tab—described by Meta CEO Mark Zuckerberg as 'a throwback to OG Facebook'—is a way for the app's users to see only the latest posts from friends, and none of the algorithm-recommended content otherwise dominating their feeds. Personal social networking, once Facebook's core product, had finally been relegated to a nostalgic lark its users could whimsically opt into.

Less than a month later, with its years-in-the-making antitrust trial, the Federal Trade Commission sought to prove Meta's early-2010s acquisitions of Instagram and WhatsApp gave it a monopoly on personal social networking sites. Over the course of the trial's six weeks, Meta's defense emerged: a precise accounting of why Facebook's new Friends feature feels so quaint and retro. As detailed in a just-released post-trial brief, Meta's argument is that it can't possibly have a monopoly on personal social networking—because personal social networking no longer meaningfully exists. That's entirely due to the way people now use Facebook and Instagram, with most of them drawn to what Meta calls 'unconnected' content, from accounts users don't even follow.

The protracted legal battle began in December 2020, when the FTC filed its lawsuit against Meta in conjunction with 46 states. The agency claimed Meta had scooped up smaller social startups like Instagram to extinguish their threat to its supremacy, and used the reach of its platforms to slow down user growth among competitors. (Less compelling for the FTC, apparently, were the instances in which Meta allegedly just copied features from the companies it didn't acquire.) Unfortunately for the FTC, the lawsuit began taking shape right as TikTok was enjoying the kind of explosive growth that Meta's monopoly is meant to have made impossible, and right as that growth's seismic impact on the entire social media landscape settled in.

In its earliest days, Facebook was all about connection, rather than content. People used it to build digital rapport with new friends, get back in touch with old ones, and keep tabs on crushes. Gradually, though, the site's News Feed began absorbing more and more of the greater internet around it, to discourage users from ever leaving. Even before TikTok's For You page hit social media like a nonchronological atom bomb, Meta seemed to realize that content relevance was driving engagement more than friendship strength, and began peppering in 'unconnected' posts that algorithmically matched user interest. TikTok's ascendance merely accelerated the shift toward unconnected-ness. Log into Facebook in 2025, perhaps to search for a used couch on Marketplace, and what awaits between updates from friends is a heady brew of sponsored posts, straight-up ads, dispatches from various celebrities and politicians, and, yes, a bottomless well of short-form video content.

The significance of Meta's evolution, though, seems lost on the FTC. The agency has worked itself into contortions to argue that Meta's primary offering is still 'personal social networking,' and that Meta isn't competing against TikTok or YouTube. In its opening statement, the FTC narrowly defined the market for its antitrust case, citing Meta competitors as insignificant as BeReal and MeWe, while excluding obvious peers such as X and TikTok, along with YouTube.
With this puzzlingly limited definition of the services Facebook and Instagram provide, the FTC claims Meta's market share of personal social networking sites amounts to 78% of all monthly active users and 85% of time spent in-app. That assertion holds water, however, only if the mid-aughts version of social media were still a market any of these apps is currently competing to dominate.

During the trial, the FTC thoroughly emphasized TikTok and YouTube's disinterest in 'friend sharing' as a means of differentiating them from Meta's apps. Adam Presser, who leads operations at TikTok, testified that only around 1% of users' time on TikTok is spent on the app's Friends tab. (The company only keeps that feature around, he claimed, in hopes that it might eventually enhance users' experience in some way.) The FTC further revealed during the trial the failure of YouTube's mid-2010s experiments with adding social features like private messaging, and that YouTube has since abandoned friend-sharing as a goal. If anything, the FTC may have been too convincing in its portrayal of the short-form video giants' indifference toward friend sharing. By doing so, the agency left an opening for Meta to argue why its own apps are now similarly inclined.

People's habits have shifted away from friend sharing

In its posttrial brief, Meta reveals the full extent to which it was rattled by TikTok's late-2010s success with both short-form video and AI recommendations, which the company claims slowed user growth for both Facebook and Instagram. 'Meta consequently made a major strategic shift to respond to competition,' the brief states. 'It invested billions of dollars to develop its own AI-recommendation algorithms to rival TikTok and introduced a new feature (called Reels) to serve the demonstrated consumer demand that was shifting away from friend sharing.' The document goes on to mention that a Meta executive, whose name and title are redacted, has been 'paying creators hundreds of millions of dollars' to secure exclusive content for Instagram. For better or worse, to suggest that Meta has not been competing in the arms race for unconnected video content is to deny reality.

At the same time, Meta's users have demonstrably gravitated to content over connection. The posttrial brief cites a 2023 experiment to determine what most engages Facebook users. Upon increasing 'friend-original' content in users' feeds by 20%, the company reported that users began spending less time on the app. When Meta took the opposite tack, however, serving more short-form video content instead, users stayed locked in longer. Skip ahead to 2025 and Meta now claims users spend only 7% of their time on Instagram and 17% of their time on Facebook consuming content from online friends.

Social media as an industry is now more than 20 years old. At the time Facebook first hit critical mass, adults may have been thrilled with the novelty of being digitally linked to so many friends and acquaintances. Zoomers, on the other hand, have grown up with social media and have been able to choose whether and how to connect online with friends their entire lives. Many now seem to prefer doing so in group chats and messaging apps. If they come to Facebook at all these days, many apparently do so as yet another means of consuming content. According to Meta's posttrial brief, 'The number of new young adult monthly active users with zero friends after 90 days on Facebook has increased from only 8% to 10% in 2012 to nearly 50% today.'
In retrospect, Meta may have rolled out its 'OG Facebook'-style Friends tab less than a month before the antitrust trial began just to prove how uninterested today's users are in friend sharing. The posttrial brief cites, in its as-yet-unsealed evidence, 'de minimis usage of the new dedicated Friends Tab' as confirmation that 'the puck is moving elsewhere.'

Where social media goes from here is yet to be determined

So, where is social media heading? There are plenty of hints in where it already is—many of them having to do with AI. Social apps are currently inundated with all manner of AI slop. A Cornell University study found that during the 2024 election about 12% of images and 1.4% of text posts on X were AI-generated. More recently, TikTok has seen a surge in AI-generated video content—with a clip of bunnies on trampolines, created by Google Veo 3, garnering more than 230 million views on the app this summer. And beyond the AI that users are posting to these apps, the platforms have been experimenting with AI chatbots as a new form of 'friend' to connect with. So far, the results have been decidedly mixed. Back in March, for instance, a Facebook Messenger chatbot named 'Big sis Billie' reportedly lured a cognitively impaired man to a physical address across state lines. The man tripped and fell along the way, ultimately dying from his injuries.

Early on in the antitrust trial, Zuckerberg described his vision of social media's future. Despite the relative failures of the Metaverse and Apple Vision Pro recently, the Meta CEO predicted the rise of 'increasingly immersive content' beyond video, claiming, 'we're just about due for this next major transition' to smart glasses that blend 'the physical and digital world together.' It remains to be seen, though, how much consumer demand exists for social media to become more like an augmented reality game.

Perhaps the future of social media is group chat apps like Geneva, Internet 1.0-aping 'social magazines' like Perfectly Imperfect, or subscription-based micro-communities on Patreon built around shared interests in a podcast or creator. On a long enough timeline, though, every done-to-death trend becomes ripe for renewal. (See: our reboot-filled box office, or Uber's obsession with reinventing the bus.) It may just be a matter of time before user fatigue from connecting with people across disparate sites and apps leads a Silicon Valley wunderkind to bring everyone together in a massive digital community. Sort of like a social network.

Our Communications Have A Credibility Problem

Forbes

15 minutes ago


Jennifer Best is the head of marketing for AmICredible, the smart platform where credibility starts with you.

If the potential for misinformation makes you not want to read anything online, you aren't alone. A 2024 Pew Research report noted that 40% of Americans who get their information online say inaccuracy is the aspect they dislike the most, a number that's increased nearly 31% over the past five years.

For the better part of a decade, we have been increasingly aware that misinformation and disinformation exist, yet few understand the difference. Misinformation, the inadvertent spreading of false or misleading information, typically occurs when people don't realize they are sharing something false. Its more sinister counterpart is disinformation: a blatant, coordinated attempt to confuse and mislead that is malicious and intentional.

These challenges are leading to an erosion of our trust in media and online news platforms. A 2023 study from the University of Southern California found that frequent, habitual users on Facebook forwarded six times more fake news than occasional or new users. Once false information spreads, it's much harder to control or limit.

The Online Credibility Conundrum

The internet thrives on sensationalism, not credibility. Online platforms prioritize content that gets a lot of engagement (clicks, shares and comments) regardless of how factually accurate it might be. After all, doing so is good for business; it keeps users engaged, stock prices climbing and board members happy. But it does little to stem the spread of misinformation, not to mention that it's not always easy to differentiate legitimate news from clickbait.

You might be asking yourself, 'Is everything I read online false or misleading?' No, but based on the research, odds are that there is misinformation in your news and social feeds right now. Nearly 80% of Americans are concerned that the information they see online is fake, false or a deliberate attempt to confuse people. You're definitely not alone.

While some may be quick to blame recent advancements with artificial intelligence (and there are plenty of challenges there to choose from), the reality is this: The Internet Age democratized the sharing of content, giving writers the opportunity to create and publish what they believe is an authoritative piece of content and maximize its visibility, regardless of factuality or truth. The internet leveled the playing field for online content creators. This is when the sharing of misinformation online began—not with the launch of social media or recent advancements with AI, although both of these have perpetuated the spread of misinformation. The problem has now become too large to ignore.

Communications In The Era Of Misinformation

As communications professionals, we know much of our professional value lies in our ability to be influential. Our corporate reputation becomes our personal brand, and it's how we present ourselves in public, speak to the media and make the case for investment when budgets are tight. When we speak publicly, either for ourselves or our organizations, we need to do so with great authenticity, empathy and credibility. What we say demonstrates our values and contributes to the larger narrative. Getting it right can make a career; getting it wrong can send stock prices tumbling.

Trust is the new currency of the workplace. It's won and lost, and you always have room for more. You can't be successful without it, especially in corporate communications.
Changing The Narrative

More professionals are taking notice of the importance of quality and credibility in communications. The Oxford-GlobeScan Global Corporate Affairs 2025 Survey Report reflects its increased importance. Whether you are speaking on behalf of an organization or for your personal brand, here's how to start changing the status quo and bringing trust back to the table.

• If it seems too outrageous to be true, be skeptical. Question everything. Before sharing, check several reputable sources and use tools and platforms to help you identify potential blind spots and verify factuality and credibility.

• Stop, look and listen. Stop and take a moment before you do anything. There is great power in the pause. Firing off one reactive and rushed comment on social media can sideline a career. Look closely at what you're posting and read it through before you even consider sharing. Even the majority of influencers don't verify what they share. Verify your claims and content with reputable sites.

• If you spot misinformation, report it. Whether you choose to question or inform the person who posted it, or report misinformation to the online platform directly, you have an opportunity to help solve this challenge.

The only way we bring trust and credibility back to our online conversations is by making small, incremental changes. Rather than ignoring the problem, we each need to become part of the solution. Nothing changes if nothing changes. The next time you read something that seems too good to be true, you know what to do.

Forbes Communications Council is an invitation-only community for executives in successful public relations, media strategy, creative and advertising agencies.

Meta faces backlash over ‘sensual' chatbot conversations with children

The Hill

an hour ago


Lawmakers on both sides of the aisle are seizing on new revelations about 'sensual' chatbot conversations Meta deemed acceptable for children, dragging the tech giant and its checkered past on children's safety back into the spotlight.

Meta, the parent company of Facebook and Instagram, has long faced scrutiny over the impact of its social media platforms on children. As the company has expanded into artificial intelligence (AI) alongside the rest of the tech industry, it is grappling with both familiar and new, distinct problems.

In an internal policy document obtained by Reuters, Meta featured examples of acceptable conversations between its AI chatbot and children, suggesting they could engage in 'conversations that are romantic or sensual' and describe them 'in terms that evidence their attractiveness' — examples Meta said were erroneous and have since been removed.

Sen. Josh Hawley (R-Mo.) slammed the tech giant Thursday, suggesting the revelations were 'grounds for an immediate congressional investigation.' He followed up with a letter to Meta CEO Mark Zuckerberg on Friday, saying the Senate Judiciary Subcommittee on Crime and Counterterrorism was opening a probe into the company's generative AI products.

'It's unacceptable that these policies were advanced in the first place,' Hawley wrote. 'Meta must immediately preserve all relevant records and produce responsive documents so Congress can investigate these troubling practices.'

Sen. Marsha Blackburn (R-Tenn.), who has long championed the Kids Online Safety Act (KOSA), pointed to the revelations as underscoring the need for such legislation. A spokesperson said the senator supports an investigation into the company.

'When it comes to protecting precious children online, Meta has failed miserably by every possible measure,' she said in a statement. 'Even worse, the company has turned a blind eye to the devastating consequences of how its platforms are designed. This report reaffirms why we need to pass the Kids Online Safety Act.'

Democrats have also joined the backlash, with Sen. Brian Schatz (D-Hawaii) questioning how the chatbot guidance was approved.

'META Chat Bots that basically hit on kids – f— that,' he wrote on X. 'This is disgusting and evil. I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this.'

Sen. Ron Wyden (D-Ore.) suggested the incident shows Meta is a company 'morally and ethically off the rails.'

'It seems clear that Mark Zuckerberg rushed an unsafe chatbot to a mass market just to keep up with the competition, consequences for its users be damned,' he said.

'I've long said that Section 230 does not protect generative AI bots like this, which are entirely created by the company, not users,' the senator continued. 'Meta and Zuckerberg should be held fully responsible for any harm these bots cause.'

Wyden's concerns underscore a key difference between the problems that Meta has previously encountered as a social media company and the issues that plague recent AI developments. Previous scandals involved content on Facebook and Instagram that was generated by users, clearly giving Meta cover under Section 230 — a portion of the Communications Decency Act that shields companies from liability for user-generated content. Social media has increasingly tested the limits of this law in recent years, as some argue major tech companies should be held responsible for harmful content on their platforms.
Meta felt the severity of this backlash in 2021, when Facebook whistleblower Frances Haugen leaked a tranche of internal documents. She later testified before Congress, alleging the firm was aware its products were harming children and teens, but still sought to profit off their engagement.

In 2024, Zuckerberg was hauled before lawmakers to discuss Meta's child safety policies, alongside the CEOs of TikTok, Discord, Snapchat and X. Following a contentious exchange with Hawley, Zuckerberg turned around in the hearing room to apologize to dozens of parents and activists.

'I'm sorry for everything you have all been through,' he said at the time. 'No one should go through the things that your families have suffered.'

However, the emergence of AI tools, like chatbots, has created new challenges for tech companies, as they make decisions about how to train AI models and what limitations to put on chatbot responses. Some, like Wyden, have argued these tools fall outside the protections of Section 230.

Parent advocates said the newly reported documents 'confirm our worst fears about AI chatbots and children's safety.'

'When a company's own policies explicitly allow bots to engage children in 'romantic or sensual' conversations, it's not an oversight, it's a system designed to normalize inappropriate interactions with minors,' Shelby Knox, campaign director for tech accountability and online safety at ParentsTogether, said in a statement.

'No child should ever be told by an AI that 'age is just a number' or be encouraged to lie to their parents about adult relationships,' she continued. 'Meta has created a digital grooming ground, and parents deserve answers about how this was allowed to happen.'

Meta spokesperson Andy Stone said in a statement Thursday that the company has 'clear policies' that 'prohibit content that sexualizes children and sexualized role play between adults and minors.' Additional examples, notes, and annotations on its policies 'reflect teams grappling with different hypothetical scenarios,' he added, underscoring that those in question have been removed.

The latest firestorm threatens to derail Zuckerberg's apparent efforts to alter his and Meta's public image to one that is more palatable to conservatives. He validated conservative censorship concerns last year, writing to the House Judiciary Committee that his company had been pressured by Biden officials in 2021 to censor content related to COVID-19 — frustrations he later reiterated during an appearance on Joe Rogan's podcast.

Zuckerberg also overhauled Meta's content moderation policies in January, announcing plans to eliminate third-party fact-checking in favor of a community-based program in what he described as an effort to embrace free speech. The move earned praise from President Trump. Like other tech leaders, the Meta chief also courted Trump's favor as he returned to office, meeting with the president-elect at Mar-a-Lago and scoring a front-row seat to the inauguration.
