
Thinkers50 Innovation Guru Sheena Iyengar Visits Taiwan to Unlock a Six-Step Innovation Method and Help Enterprises Break Through - TNL The News Lens 關鍵評論網
Innovation guru Sheena Iyengar is coming to Taiwan to share the six-step innovation process behind "Think Bigger" and to join leaders from Taiwan's industry and academia in a dialogue on how to innovate effectively in the VUCA era. See the official website for forum details.
Facing a rapidly shifting global landscape, how can Taiwanese enterprises effectively ignite innovation and break through to the next level?
The 12th International Master Forum, co-hosted by CommonWealth Magazine (天下雜誌) and E.SUN Financial Holding (玉山金控), will be held on June 24, 2025 at the Taipei International Convention Center, featuring Thinkers50 innovation guru Sheena Iyengar, who is visiting Taiwan to share her method for innovation that delivers results.
Iyengar, a professor at Columbia Business School, has studied decision-making and innovation for 30 years and has deep practical experience: her consulting clients include top international companies across industries such as Google, Deloitte, and Bloomberg, making her a widely respected master of innovation management.
Beyond blue-sky ideas and fruitless brainstorming: a practical six-step innovation process
In today's VUCA environment of volatility, uncertainty, complexity, and ambiguity, old ways of thinking often fall short. Iyengar's "Think Bigger" framework is an effective innovation method for creating new choices. Only through innovation that genuinely solves problems and creates value can companies break through in a challenging market and set out on a sustainable path of growth.
Iyengar stresses that true innovation is not aimless ideation but a systematic, executable process. Her book Think Bigger lays out a practical innovation method in six steps, from choosing the problem to the third-eye test, helping companies pinpoint the core of a problem, break down complex challenges, and, by comparing the wants of multiple stakeholders and searching for combinations of diverse solutions, methodically create innovations that truly solve problems and deliver value. The method guides decision-makers toward "effective choices" among many possibilities, so resources are not wasted on impractical ideas.
A heavyweight dialogue to spark innovation: industry and academic leaders explore how to break the frame
Beyond Professor Iyengar's keynote, the forum invites Taiwanese industry and academic experts with deep innovation experience to join her in discussion: E.SUN Financial Holding and E.SUN Bank chairman Joseph Huang (黃男州), Appier co-founder and CEO Chih-Han Yu (游直翰), and Hsi-Peng Lu (盧希鵬), distinguished professor of information management at National Taiwan University of Science and Technology.
This cross-disciplinary conversation will approach "Think Bigger" from different angles, sharing its applications and value in business practice, technological innovation, and academic research, and promises attendees a richer and more diverse set of insights.
About the International Master Forum: face to face with the world's leading thinkers
Since 2012, the International Master Forum, co-hosted by CommonWealth Magazine and E.SUN Financial Holding, has brought the world's leading management thinking to Taiwanese enterprises, helping them strengthen their competitiveness and grasp the keys to future growth. This year's forum focuses on the heart of "Think Bigger": innovation that creates effective choices. By inviting Professor Iyengar and a heavyweight panel, the organizers aim to equip Taiwan's business leaders with the innovation mindset and practical methods that matter most in today's environment.
Forum information
Date and time: June 24, 2025, 14:00-16:30
Venue: Taipei International Convention Center, 3F Plenary Hall
Official website: https://mkt.cw.com.tw/2025master/
Editor: Joanna; copy editor: 楊士範

Related Articles
Yahoo
CareRx Reports Voting Results from Annual General Meeting of Shareholders
Toronto, Ontario--(Newsfile Corp. - June 6, 2025) - CareRx Corporation (TSX: CRRX) ("CareRx" or the "Company"), Canada's leading provider of pharmacy services to seniors living and other congregate care communities, today announced that the nominees listed in the Management Information Circular for the Annual General Meeting of Shareholders (the "Meeting"), which was held on June 5, 2025, were elected as directors of the Company. In addition, all other resolutions put to a vote at the Meeting were passed.
The results of the vote for the election of directors at the Meeting are as follows:
Nominee           Votes For     % For
Kevin Dalton      31,081,208    99.808%
Ralph Desando     31,099,751    99.868%
Puneet Khanna     31,117,133    99.924%
Jason Maguire     31,030,902    99.647%
Bruce Moody       31,124,633    99.948%
Maria Perrella    31,104,459    99.883%
Jeff Watson       31,124,633    99.948%
About CareRx Corporation
CareRx is Canada's leading provider of pharmacy services to seniors living and other congregate care communities (long-term care homes, retirement homes, assisted living facilities and group homes). We are a national organization with a large network of pharmacy fulfillment centers strategically located across the country. This allows us to deliver medications in a timely and cost-effective manner and to respond quickly to routine changes in medication management. We use best-in-class technology that automates the preparation and verification of multi-dose compliance packaging of medication, providing the highest levels of safety and adherence for individuals with complex medication regimens. We take an active role in working with our home operator partners to promote resident health, staff education, and medication system quality and efficiency.
For additional information, please contact:
Puneet Khanna, President & Chief Executive Officer, CareRx Corporation, 416-927-8400
Suzanne Brand, Chief Financial Officer, CareRx Corporation, 416-927-8400
Neil Weber, Investor Relations, LodeRock


CBS News
Colorado Safeway union workers agree to go on strike
The United Food and Commercial Workers (UFCW) Local 7 said union members in Colorado have agreed to go on strike. The decision follows a two-day voting period and nine months of negotiations.
Local 7 accuses Albertsons, which owns Safeway, of unfair labor practices: not offering retroactive wage increases and cutting the union out when addressing employee grievances. The union also wants better pay and health care.
The strike vote comes as contract negotiations between workers and King Soopers are happening at the same time; King Soopers and its workers have until June 8 to hammer out a new deal. "Both of these big groceries, they are just big bullies," said UFCW Local 7 President Kim Cordova. "They are just bullies at the table, and they are consolidating their power again to control the labor market for grocery store workers."
Safeway released this statement: "Safeway in Colorado remains committed to productive discussions with UFCW Local 7 and we have contract extensions in place while we do so. We respect the rights of workers to engage in collective bargaining and are negotiating in good faith to achieve an agreement. Our focus remains on providing exceptional service to our customers and fostering a positive working environment for our associates. All Safeway and Albertsons stores in Colorado are open and ready to continue serving our communities."
Local 7 had to give 72 hours' notice before striking. A strike would affect Safeway stores in the Denver metro area. For now, Local 7 says workers will report as normal until a strike begins.


Forbes
AI Safety: Beyond AI Hype To Hybrid Intelligence
The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: ChatGPT-4o shows a 61% hallucination rate on SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna, once a poster child for AI-first customer service, are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.
Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.
The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox of simultaneous dependence and distrust creates a dangerous cognitive dissonance in corporate decision-making.
AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In the first quarter of 2025 alone, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that contained AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.
The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence: deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses.
Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to someone without a driver's license.
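What might such a gate look like in code? The sketch below is a minimal illustration of the colleague-in-the-loop pattern, not Mixus's actual platform or API; the names (Draft, Decision, route_draft), the confidence score, and the reviewer hand-off are all assumptions made for the example.

```python
# Minimal colleague-in-the-loop sketch (illustrative, not a real product API):
# routine, high-confidence AI output flows through automatically, while
# low-confidence or business-critical items go to a human reviewer first.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Draft:
    task_id: str
    output: str
    confidence: float  # score in [0, 1], however the deployment estimates it


@dataclass
class Decision:
    task_id: str
    output: str
    approved_by: str  # "model" if auto-approved, else a human reviewer id


def route_draft(
    draft: Draft,
    is_critical: Callable[[Draft], bool],       # encodes "critical decision points"
    human_review: Callable[[Draft], Decision],  # hands the item to a person
    threshold: float = 0.9,
) -> Decision:
    if draft.confidence < threshold or is_critical(draft):
        # A human verifies exactly where an error would be costly.
        return human_review(draft)
    # Routine processing stays automated, but the decision record still
    # shows that no human signed off on this item.
    return Decision(draft.task_id, draft.output, approved_by="model")
```

The interesting design work hides in `is_critical`: it is where an organization writes down, in executable form, which decisions it refuses to automate.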
The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.
LawZero is another initiative in this direction. It seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic: it doesn't act autonomously, but instead responds to human input and goals. The underpinning belief is that AI should be cultivated as a global public good, developed and used safely toward human flourishing. It should be prosocial.
While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay, the gradual erosion of human decision-making capabilities, poses a systemic risk as employees become overly dependent on AI recommendations. Mass-persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries.
47% of business leaders name people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.
Double literacy, investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy, emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning, uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.
In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time with patients while AI handles routine tasks, improving care outcomes and reducing burnout. Some business leaders are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.
Practical Implementation: The A-Frame Approach
If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.
Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.
Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.
Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience: systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime (a minimal sketch of this pattern follows the list).
Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.
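To make Acceptance and Accountability concrete, here is a minimal sketch of a resilient AI call under stated assumptions: the `ai_step` and `fallback` callables, the retry policy, and the audit-log fields are illustrative inventions, not part of the A-Frame as published.

```python
# Illustrative sketch of the Acceptance and Accountability practices:
# the AI call is allowed to fail, degrades to a pre-designed fallback
# path, and every outcome is logged against a named owner.

import logging
import time
from typing import Callable

log = logging.getLogger("ai_audit")


def resilient_call(
    ai_step: Callable[[str], str],
    fallback: Callable[[str], str],  # e.g., a rules engine or a human queue
    owner: str,                      # accountable individual, named before deployment
    request: str,
    retries: int = 2,
) -> str:
    for attempt in range(retries + 1):
        try:
            result = ai_step(request)
            log.info("owner=%s path=ai attempt=%d status=ok", owner, attempt)
            return result
        except Exception as exc:  # zero-failure AI systems are mythical
            log.warning("owner=%s path=ai attempt=%d failed: %s", owner, attempt, exc)
            time.sleep(2 ** attempt)  # brief backoff before retrying
    # Graceful degradation: escalate to the fallback rather than fail outright.
    log.info("owner=%s path=fallback", owner)
    return fallback(request)
```

The point is not the retry loop but the shape: failure is expected in advance (Acceptance), the fallback path is designed before it is needed, and every path is recorded against a named owner (Accountability).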
The AI safety challenge isn't primarily technical; it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double-literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice.
The alternative, rushing headlong into AI deployment without adequate safeguards, risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.
Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter, investing in hybrid intelligence and double literacy today, will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions.
The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment, enhanced, not replaced, by artificial intelligence.