
How 'Hitman' Developer Became One of the Largest Independent Game Companies
Eight years ago, IO Interactive was bleeding money. Now it's one of the biggest privately owned video-game companies in the world.
Related Articles


Forbes
an hour ago
Multimodal AI: A Powerful Leap With Complex Trade-Offs
Artificial intelligence is evolving into a new phase that more closely resembles human perception and interaction with the world. Multimodal AI enables systems to process and generate information across various formats such as text, images, audio, and video. This advancement promises to revolutionize how businesses operate, innovate, and compete. Unlike earlier AI models, which were limited to a single data type, multimodal models are designed to integrate multiple streams of information, much like humans do. We rarely make decisions based on a single input; we listen, read, observe, and intuit. Now, machines are beginning to emulate this process. Many experts advocate for training models in a multimodal manner rather than focusing on individual media types.

This leap in capability offers strategic advantages, such as more intuitive customer interactions, smarter automation, and holistic decision-making. Multimodal AI has already become a necessity in many simple use cases today, such as comprehending presentations that combine images, text, and more. However, responsibility will be critical, as multimodal AI raises new questions about data integration, bias, security, and the true cost of implementation.

Multimodal AI allows businesses to unify previously isolated data sources. Imagine a customer support platform that simultaneously processes a transcript, a screenshot, and a tone of voice to resolve an issue. Or consider a factory system that combines visual feeds, sensor data, and technician logs to predict equipment failures before they occur. These are not just efficiency gains; they represent new modes of value creation. In sectors like healthcare, logistics, and retail, multimodal systems can enable more accurate diagnoses, better inventory forecasting, and deeply personalized experiences. In addition, and perhaps more importantly, the ability of AI to engage with us in a multimodal way is the future.
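The kind of fusion described above can be illustrated with a toy "late fusion" scorer. This is a minimal sketch under stated assumptions, not any vendor's implementation: each modality (camera feed, sensor readings, technician logs) is assumed to have already been reduced to a single anomaly score in [0, 1] by its own hypothetical encoder, and the fused failure risk is simply a weighted average. Real multimodal systems typically learn the combination jointly rather than using fixed weights.

```python
def fuse_risk(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-modality anomaly scores (simple late fusion)."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical encoder outputs for one machine, plus illustrative weights:
risk = fuse_risk(
    {"vision": 0.8, "sensors": 0.6, "logs": 0.3},
    {"vision": 0.5, "sensors": 0.3, "logs": 0.2},
)
print(round(risk, 2))  # 0.8*0.5 + 0.6*0.3 + 0.3*0.2 = 0.64
```

The design point is that each modality contributes independently, so a missing stream degrades the score gracefully instead of breaking the pipeline.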
Talking to an LLM is easier than writing and then reading through responses. Imagine systems that can engage with us through a combination of voice, video, and infographics to explain concepts. This will fundamentally change how we engage with the digital ecosystem, and it is perhaps a big reason why many are starting to think the AI of tomorrow will need something different than just laptops and screens. This is why leading tech firms like Google, Meta, Apple, and Microsoft are heavily investing in building native multimodal models rather than piecing together unimodal components.

Despite its potential, implementing multimodal AI is complex. One of the biggest challenges is data integration, which involves more than just technical plumbing. Organizations need to feed integrated data flows into models, which is not an easy task. Consider a large organization with a wealth of enterprise data: documents, meetings, images, chats, and code. Is this information connected in a way that enables multimodal reasoning? Or think about a manufacturing plant: how can visual inspections, temperature sensors, and work orders be meaningfully fused in real time? That's not to mention the computing power multimodal AI requires, which Sam Altman referenced in a viral tweet earlier this year. But success requires more than engineering; it requires clarity about which data combinations unlock real business outcomes. Without this clarity, integration efforts risk becoming costly experiments with unclear returns on investment.

Multimodal systems can also amplify biases inherent in each data type. Visual datasets, such as those used in computer vision, may not equally represent all demographic groups. For example, a dataset might contain more images of people from certain ethnicities, age groups, or genders, leading to a skewed representation.
Asking an LLM to generate an image of a person drawing with their left hand remains challenging; the leading hypothesis is that most pictures available for training show right-handed individuals. Language data, such as text from books, articles, social media, and other sources, is created by humans who are influenced by their own social and cultural backgrounds. As a result, the language used can reflect the biases, stereotypes, and norms prevalent in those societies. When these inputs interact, the effects can compound unpredictably. A system trained on images from a narrow population may behave differently when paired with demographic metadata intended to broaden its utility. The result could be a system that appears more intelligent but is actually more brittle or biased. Business leaders must evolve their auditing and governance of AI systems to account for cross-modal risks, not just isolated flaws in training data.

Additionally, multimodal systems raise the stakes for data security and privacy. Combining more data types creates a more specific and personal profile. Text alone may reveal what someone said, audio adds how they said it, and visuals show who they are. Adding biometric or behavioral data creates a detailed, persistent fingerprint. This has significant implications for customer trust, regulatory exposure, and cybersecurity strategy. Multimodal systems must be designed for resilience and accountability from the ground up, not just for performance.

Multimodal AI is not just a technical innovation; it represents a strategic shift that aligns artificial intelligence more closely with human cognition and real business contexts. It offers powerful new capabilities but demands a higher standard of data integration, fairness, and security. For executives, the key question is not just, "Can we build this?" but "Should we, and how?" What use case justifies the complexity? What risks are compounded when data types converge?
How will success be measured, not just in performance but in trust? The promise is real, but like any frontier, it demands responsible exploration.


Forbes
2 hours ago
AI Forces Leaders To Rediscover The Missing Humanistic Component
In boardrooms across the world, a familiar scene unfolds daily: executives hunched over spreadsheets, dissecting metrics, optimizing processes, and chasing the next quarter's numbers. It's a dance of efficiency we've perfected over decades, one that has undoubtedly driven remarkable progress. Yet beneath this symphony of optimization lies a quieter truth: we've been playing only half the song. The other half, the humanistic component that transforms good organizations into great ones and sustainable success into legacy, has been relegated to the background, often dismissed as too soft, too unmeasurable, or too idealistic for serious business consideration. But as artificial intelligence reshapes our landscape rapidly, we're being handed an unexpected gift: the opportunity to finally address this imbalance and conduct the full orchestra of human potential.

Our modern obsession with quantifiable outputs has created an optimization trap. According to Stanford's 2024 AI Index Report, AI capabilities continue to advance rapidly across multiple benchmarks, yet many leaders remain focused primarily on efficiency gains and cost reductions. This narrow focus has led us to optimize for what we can measure while neglecting the underlying melody that makes organizations truly thrive: the human elements of trust, creativity, empathy, and meaning-making. The obsession with key performance indicators has led to a counterproductive switch from "measure what you treasure" to "treasure what you measure."

Consider the difference between a technically proficient musician and a master performer. Both can play the notes correctly, but only the master understands that music lives in the spaces between the notes, in the emotional resonance that transforms sound into experience. Similarly, great leadership isn't just about hitting performance targets; it's about creating conditions where people feel valued, inspired, and connected to something larger than themselves.
Organizations that have thrived over decades understand this intuitively. They recognize that while systems and processes provide the framework, it's the humanistic elements that provide the soul. These elements (emotional intelligence, moral courage, authentic communication, and genuine care for stakeholder wellbeing) create the conditions for innovation, resilience, and sustainable growth. It is a win-win-win-win: for the humans we are, the institutions we belong to, the country we are part of, and the planet we depend on.

The current AI revolution, rather than threatening our humanity, is offering us an opportunity to confront what it truly means to be human and humane. As machines become increasingly capable of handling routine cognitive tasks, we're forced to grapple with fundamental questions: What uniquely human capabilities should we cultivate? How do we create value that transcends mere efficiency? What does authentic leadership look like in an age of artificial intelligence?

Hybrid intelligence combines the best of AI and humans, leading to more sustainable, creative, and trustworthy results. This isn't about humans versus machines, but about recognizing that our greatest potential lies in thoughtful collaboration, what we might call the complementarity of natural and artificial intelligence. The key insight here is that AI's growing capabilities don't diminish human value; they amplify our need to focus on what makes us irreplaceably human. While AI excels at pattern recognition, data processing, and optimization, humans bring contextual understanding, ethical reasoning, creative synthesis, and the ability to navigate ambiguity with wisdom and compassion. This realization opens the door to prosocial AI: systems that are intentionally designed, trained, tested, and targeted to bring out the best in people and organizations while serving the broader good of society and planet.
Rather than simply automating existing processes, prosocial AI amplifies human potential and creates conditions for collective flourishing. AI systems can be designed to promote positive social outcomes rather than simply maximizing engagement or profit. But these positive outcomes will not happen as an automatic derivative of ever more sophisticated technology. Ultimately, the technology of tomorrow will be as good or as ugly as the humans of today: "garbage in, garbage out," or "values in, values out." We have a choice, but we need to make it.

The vision of prosocial AI aligns with a deeper aspiration: creating a world where everyone has a fair chance to fulfill their inherent potential to flourish. This isn't utopian thinking; it's practical wisdom. Organizations that prioritize human flourishing alongside performance metrics consistently outperform those focused solely on short-term gains. Making this happen requires humanistic leadership.

Designing the future requires a more nuanced vision of life and living. Beyond discussions of AI and the future of work, it is time to envision HI and the future of life itself. The future belongs not to humans who excel with artificial intelligence alone, nor to purely human-driven approaches. The catalysts of positive social change will be those who master hybrid intelligence (HI), the dynamic interplay between natural intelligence (NI) and artificial intelligence (AI). It is time to move beyond either-or equations.

Think of it like learning to dance with a partner who has completely different rhythms and strengths. The magic doesn't happen when one leads and the other follows, but when both contribute their unique gifts to create something neither could achieve alone. Human intelligence carries the weight of millennia: our capacity for moral reasoning shaped by countless generations, our ability to read between the lines of what isn't said, our gift for creative leaps that defy logical progression.
We carry the wisdom of bodies that have felt joy and heartbreak, minds that dream in metaphors, and spirits that yearn for meaning beyond mere efficiency. Artificial intelligence, meanwhile, brings a different kind of power: the ability to hold thousands of variables in perfect balance, to spot patterns across vast landscapes of information, to maintain unwavering consistency in analysis. Where we bring depth, AI brings breadth. Where we offer intuition, AI provides systematic thoroughness.

What the research doesn't capture is the qualitative shift that happens when organizations stop asking "How can AI do this job?" and start asking "How can AI help humans do this job better?" And maybe even, "How can AI support humans to be better?" The difference isn't semantic; it's foundational. Effective hybrid intelligence emerges not from clever technical integration, but from a fundamental commitment to amplifying what makes us most human while leveraging what makes AI most useful. This means designing systems that create more interesting work, not less. More meaningful connections, not fewer. More opportunities for human creativity and judgment, not replacement of them.

To develop the kind of hybrid intelligence that serves both performance and humanity, leaders need a practical framework. The A-Frame approach builds on four foundational elements:

Awareness: Developing a deep understanding of both human and artificial intelligence capabilities and limitations. This means cultivating double literacy: human literacy (a holistic understanding of self, others, and social systems) combined with algorithmic literacy (understanding what AI is, how it works, and where it falls short).

Appreciation: Recognizing and valuing the unique contributions that both humans and AI bring to complex challenges. This involves moving beyond the binary thinking that sees AI as either savior or threat, instead appreciating the complementary nature of human and artificial capabilities.
Acceptance: Acknowledging the current realities and limitations of both human and artificial intelligence without falling into either technophobia or techno-utopianism. This includes accepting that neither humans nor AI are perfect, and that their combination requires ongoing attention and refinement.

Accountability: Taking responsibility for the outcomes of NI-AI collaboration, ensuring that the integration serves ethical purposes and contributes to human flourishing. This means establishing clear governance structures, feedback mechanisms, and guidelines for prosocial AI deployment.

The A-Frame approach rests on developing double literacy, a combination of human literacy and algorithmic literacy that enables leaders to navigate the hybrid intelligence landscape effectively. Human literacy involves deep self-awareness, emotional intelligence, understanding of social dynamics, and appreciation for the full spectrum of human experience. It means recognizing that humans are not merely rational actors but complex beings driven by emotions, values, relationships, and meaning-making. Leaders with strong human literacy understand how to create psychological safety, foster authentic relationships, and inspire others toward shared purposes.

Algorithmic literacy, meanwhile, involves understanding AI's capabilities, limitations, and appropriate applications. This doesn't require becoming a technical expert, but it does mean understanding how AI systems learn, what kinds of biases they might embed, and where human judgment remains essential. Leaders with strong algorithmic literacy can make informed decisions about when and how to deploy AI tools while maintaining appropriate oversight and accountability. Together, these literacies enable leaders to orchestrate human-AI collaboration that amplifies the best of both worlds while mitigating risks and unintended consequences.

Developing hybrid intelligence isn't an abstract concept; it requires concrete action.
Organizations can begin by conducting honest audits of their current approach to AI integration, asking not just whether AI is improving efficiency, but whether it's enhancing human capabilities and contributing to meaningful outcomes. This might involve redesigning workflows to leverage AI's analytical strengths while preserving space for human creativity and relationship-building. It could mean establishing cross-functional teams that include both technical specialists and humanities-trained professionals who can provide essential perspectives on AI's human impact. Most importantly, it requires leaders who model the integration of analytical rigor with humanistic wisdom: executives who can read both spreadsheets and the emotional temperature of their organizations, who understand both market dynamics and human psychology.

The AI revolution presents us with a unique historical moment, an invitation to step back from our relentless pursuit of optimization and remember what we've been optimizing for. It challenges us to embrace both the power of artificial intelligence and the irreplaceable value of human wisdom, creativity, and compassion. The choice before us isn't between natural and artificial intelligence, but between narrow optimization and holistic flourishing. By embracing hybrid intelligence through the A-Frame approach and developing double literacy, we can create organizations and systems that don't just perform better, but contribute to a world where technology serves humanity's highest aspirations.

The missing melody hasn't been lost; it's been waiting for us to remember how to play it. Now, with AI as our accompanist rather than our replacement, we have the opportunity to conduct a humanistic symphony that is in sync with human potential.
Yahoo
2 hours ago
Can GameStop Stock Rise From the Ashes?
CEO Ryan Cohen has helped stabilize GameStop's business. Meanwhile, the company has sold equity to build a large cash hoard and is buying up Bitcoin. What Cohen does with GameStop's cash will determine whether he can transform the company.

Under the stewardship of CEO Ryan Cohen, GameStop (NYSE: GME) is looking to rise from the ashes. Cohen himself recently said the company was a "piece of crap" when he took over in the fall of 2023. At the time, the company was losing money and facing structural challenges. At its core, GameStop is still a global retailer of new and pre-owned video games and video game hardware, operating thousands of stores across North America, Europe, and Australia. The video game industry, meanwhile, has seen a shift from physical games to digital downloads and subscription models. There also hasn't been a major new gaming console release to drum up consumer demand since 2020.

The bright spot for GameStop has been its foray into the collectibles market, which it started to pursue around 2016. While the collectibles business has had its ups and downs over the years, the company seems to have found its niche with the buying and selling of graded trading cards for popular games such as Pokémon, Yu-Gi-Oh!, and Magic: The Gathering. Last fall, it also became an authorized dealer for grading company PSA, allowing collectors to drop off cards at GameStop locations to be sent in for grading. While GameStop continued to see large declines in its video game and hardware sales last quarter, collectibles were a bright spot. Overall sales sank 17% in its fiscal 2025 Q1 to $732.4 million, but collectible sales soared 55% to $211.5 million.

Upon taking over GameStop, Cohen called for "extreme frugality," saying every expense must be examined and all waste eliminated.
That frugality was also on display last quarter as the company was able to flip its year-ago loss of $32.3 million into a $44.8 million profit, despite the overall decline in sales. GameStop also generated $189.6 million in free cash flow in the quarter and ended the period with $6.4 billion in cash against nearly $1.5 billion in debt in the form of a 0% convertible note.

And here is where the story gets really interesting. During its meme-stock days, GameStop was able to take advantage of its high stock price and issue equity at attractive prices to raise a lot of cash. Though its share price has come down from the levels seen in 2021, the stock has remained elevated enough that Cohen decided to issue additional equity last year through at-the-market (ATM) offerings, contributing to its current cash holdings. That much cash gives Cohen, who also owns just over 8% of the company through his RC Ventures investment vehicle, the means to transform GameStop.

While Cohen has helped stabilize the GameStop business and even made it profitable, that is not where its potential lies. At the end of the day, there are just too many structural headwinds working against it for it to become a strong and growing retail business. With over $6 billion in cash, though, Cohen has options. His first step was using some of the company's cash hoard to buy 4,710 Bitcoin between May 3, 2025 and June 10, 2025. At current prices, that stake would be worth around $518 million. Cohen said the company added Bitcoin to "hedge against global currency devaluation and systemic risk." However, just buying Bitcoin is not going to transform the company. If that is the company's strategy, it would be better for investors to just buy Bitcoin themselves. That said, Cohen hasn't indicated that this is his long-term plan, and Bitcoin could just be a place to store some of GameStop's cash for now.
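The Bitcoin figures above can be sanity-checked with quick arithmetic: 4,710 BTC valued at roughly $518 million implies a price of just under $110,000 per coin. The numbers below are the article's rounded figures, not live market data.

```python
btc_held = 4_710               # coins purchased, per the article
stake_value = 518e6            # ~$518 million "at current prices"
implied_price = stake_value / btc_held
print(f"Implied price: ${implied_price:,.0f} per BTC")  # about $110,000
```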
A potential move for Cohen would be to either acquire a fast-growing business or turn GameStop into a holding company like Berkshire Hathaway. People often forget that Berkshire was a struggling textile company that Warren Buffett bought and eventually had to shut down before it became the massive conglomerate it is today. Cohen has proven he can successfully run and turn around a struggling business, but now he needs to expand beyond GameStop's legacy model.

With an enterprise value of over $8.5 billion, GameStop is currently valued at far more than the intrinsic value of its retail business. Note that enterprise value takes into consideration its sizable cash position. That's why one Wall Street analyst recently said the company is relying on the "greater fool" theory, meaning that the only way the stock price will go up from here is if someone else is willing to buy it at a higher price. That's a bit harsh, and given GameStop's cash hoard, Cohen does have the resources necessary to transform the company. However, I would agree that investors shouldn't be buying the stock at these overvalued levels based only on the hope that he comes up with a viable turnaround plan.
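To see how enterprise value "takes into consideration" the cash position, recall the standard formula: EV = market capitalization + total debt - cash and equivalents. Plugging in the article's approximate figures ($8.5 billion EV, $1.5 billion debt, $6.4 billion cash) gives an implied market cap of about $13.4 billion; these are rounded illustrations, not exact quoted values.

```python
def enterprise_value(market_cap: float, total_debt: float, cash: float) -> float:
    """EV = market capitalization + total debt - cash and equivalents."""
    return market_cap + total_debt - cash

ev, debt, cash = 8.5e9, 1.5e9, 6.4e9    # approximate article figures
implied_market_cap = ev - debt + cash    # rearranged formula: ~$13.4 billion
print(f"Implied market cap: ${implied_market_cap / 1e9:.1f}B")
```

Because the $6.4 billion of cash is subtracted, the market values the equity well above the stated EV, which is the gap the "greater fool" comment is pointing at.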
Geoffrey Seiler has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Berkshire Hathaway. The Motley Fool has a disclosure policy.

Can GameStop Stock Rise From the Ashes? was originally published by The Motley Fool