
A Beloved Canadian Chocolate Bar Disappears From the Market
Neilson Jersey Milk, the signature offering of a company that once dominated the chocolate business in Canada, has been pulled from the market. Many Canadians still remember the illustrations of the bars, with their gold and white packaging, that appeared in the ocean below Nova Scotia on maps of Canada that Neilson sent at no charge to schools — a kind of corporate sponsorship that's not likely to be permitted today.
Neilson's candy division, then owned by the Weston family — which owns Canada's largest supermarket and drugstore chains and has been the object of public scorn for some of its business decisions — passed out of Canadian ownership in 1996 and then changed hands several times. It's now part of Mondelez International, the American corporate giant that comprises Nabisco along with international brands like Cadbury and Toblerone. Mondelez sold $36 billion worth of snack foods last year.
The company did not respond to my questions about why Jersey Milk was no longer for sale or when production in what was originally Neilson's chocolate factory in Toronto had stopped.
But a spokeswoman told The Canadian Press news agency that Jersey Milk had been dropped because the company found that shoppers preferred other chocolate bars in its catalog, like Cadbury's Dairy Milk, which Mondelez makes in the same Toronto plant.
Related Articles


Forbes
Why Ethical AI Is Key To Building Trust
Lee Blakemore, CEO of Introhive, explores how organizations can balance AI innovation with integrity to strengthen client relationships.

AI is transforming how businesses operate, driving new efficiencies and insights. But in the professional world, where our client relationships and trust are paramount, the question isn't just what AI can do; it's also how responsibly we use it, particularly when it affects both human connections and how we manage client data.

When it comes to relationship building, AI also raises important concerns around data privacy, security and governance. It's not just about what AI creates; it's also about what data AI consumes. Ethical use of AI depends on a strong commitment to protecting client information through robust policies, secure infrastructure and clear governance. Without that foundation, it's not just trust that's at risk; it's the relationships that make business possible.

How AI Can Strengthen Client Relationships

AI can be a powerful ally in building stronger client relationships. By managing client information, tracking communication history and automating repetitive tasks, it frees up time for professionals to focus on the work that truly deepens connections: listening, advising and solving complex problems.

But with that opportunity comes an ethical responsibility: AI should never reduce relationships to transactions. When it's used in ways that feel impersonal or indiscriminate, it risks treating clients as data points rather than people. I've seen a growing number of automated LinkedIn messages that come across as overly polished or impersonal. They might be grammatically flawless, but they don't sound human or genuinely interested.

Relationship-building depends on empathy, context and intent, all of which are areas where AI falls short. Used thoughtfully, AI has the potential to amplify the human touch, but that means deploying it in ways that support more meaningful engagement, not shortcuts that undermine sincerity.

Strengthening Governance: The Cornerstone Of Responsible AI

As AI systems become more embedded in how organizations operate, it's not enough to focus only on AI capabilities. Many companies still treat compliance as an exercise (a series of audits, checklists or security reviews) that happens in isolation. But real governance needs to be proactive, continuous and cross-functional. It's about embedding ethical and regulatory considerations into the day-to-day: not just responding to issues when they appear, but designing processes that prevent them in the first place and fostering a culture where governance is a shared organizational mindset.

One common challenge is that governance structures often become fragmented as organizations grow. With data flowing through more platforms, partners and tools than ever, it's easy to lose track of how that data is being accessed, shared and used, especially by AI systems trained on sensitive or proprietary information. As a result, a critical challenge for organizations is knowing whether AI is secure or accurate, as well as having a firm handle on how data powers those systems. Being able to stand behind the decisions AI is helping to make means understanding where the data comes from, how models are trained and what kinds of risks they introduce.

Strong AI governance means putting the right guardrails in place. It means maintaining clear policies for data usage, validation checks for model performance and ethical reviews of how AI interacts with clients and staff.
It also calls for a higher level of scrutiny for third-party technologies and partners, since responsibility doesn't stop at the vendor contract. When designing our platform, we could have pursued many paths, some of which would have provided a quicker route to AI-driven insights into relationships. But after discussions with our clients, we chose not to use third-party APIs like ChatGPT, so as to keep our customer data within our data centers and bring the LLM to the data instead. This took longer, but it was more in line with our data residency and privacy commitments to our clients.

At the end of the day, AI is a powerful enabler, but it's how we use it that defines its impact. In a business context, the future of AI will be shaped not just by innovation, but by our ability to use it responsibly, transparently and in alignment with our core values.

Practical Steps To Preserving Authenticity And Protecting Data While Using AI

When communication lacks authenticity, or when data isn't handled with intentionality and care, it places an organization's relationships and reputation at risk. Preserving authenticity and protecting data aren't separate objectives; they're twin priorities for any organization using AI responsibly. Here are practical steps to ensure your organization upholds both authenticity and data security in its AI strategy:

- Ensure AI is used only for support tasks like data management or client insights, while personalized communication remains human-driven. Define and enforce clear boundaries to prevent over-reliance on automation.
- Develop strict policies around how client data is collected, stored, accessed and used by AI systems. Ensure compliance with data privacy regulations and industry standards.
- Require that AI outputs are edited by humans to add personalization and context before being shared externally, ensuring both the message and the data usage are thoughtful and appropriate.
- Leverage AI for analyzing data and uncovering touchpoints, but always have a person follow up. For example, AI can highlight clients who haven't been contacted recently, but the outreach should be human-driven (a minimal sketch of this kind of flagging follows at the end of this article).
- Form a team to review how AI is used in customer interactions and ensure it aligns with company values. This team can make decisions about when and where AI can be ethically deployed.
- Ensure that clients always know how to reach a human representative quickly, even if AI is being used for initial communications or data handling.
- Invest in security infrastructure to ensure client data is protected from unauthorized access and breaches. Ethical AI begins with secure AI.

As we integrate AI into how we work, the priority isn't just innovation; it's making sure that technology strengthens the connections that matter.
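To ground the "highlight clients who haven't been contacted recently" step from the list above, here is a minimal sketch in Python of how such a flag could be computed from a contact log. The sample data, field names and 90-day threshold are assumptions for illustration, not a description of Introhive's product; the point is that the software only surfaces candidates, while a person decides on and writes the outreach.

from datetime import datetime, timedelta

# Hypothetical contact log: client name -> date of the last recorded touchpoint.
# In a real system this would come from the CRM, not a hard-coded dict.
last_contact = {
    "Acme Corp": datetime(2025, 3, 12),
    "Globex": datetime(2025, 7, 1),
    "Initech": datetime(2024, 11, 30),
}

STALE_AFTER = timedelta(days=90)  # assumed threshold; tune to the business


def stale_clients(log, now=None):
    """Return clients with no recorded contact within the threshold, oldest first.

    The output is a prompt list for human follow-up, not an automated
    message queue: a person decides whether and how to reach out.
    """
    now = now or datetime.now()
    flagged = [(name, last) for name, last in log.items() if now - last > STALE_AFTER]
    flagged.sort(key=lambda item: item[1])  # longest-neglected clients first
    return [name for name, _ in flagged]


for name in stale_clients(last_contact):
    print(f"Flag for personal follow-up: {name}")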


Bloomberg
Dye & Durham's Ex-CEO Urges Engine Capital Founder to Quit Board
A major shareholder of Dye & Durham Ltd. ramped up the pressure on two directors to resign, alleging failures in the Canadian technology company's internal controls and governance practices. Plantro Ltd., a company controlled by former Chief Executive Officer Matt Proud, said in a news release that it's aware of a whistleblower complaint alleging that management was pressured to adopt 'aggressive accounting practices,' though it provided few specifics.
Yahoo
China Clears Synopsys' $35 Billion Ansys Buyout in US Win
(Bloomberg) -- Synopsys Inc. has secured China's approval to buy out Ansys Inc. for $35 billion, a major win for a company regarded as key to helping sustain US dominance of certain aspects of semiconductor technology.

The State Administration for Market Regulation gave the acquisition a green light, with certain conditions, the agency said in a statement. Among other things, the Chinese watchdog mandated that Synopsys cannot reject requests from customers to renew their contracts. Washington this year briefly considered limiting Synopsys and its rivals from dealing with Chinese clients on the grounds of national security.

Synopsys and Cadence Design Systems Inc. — the two American firms that dominate the global market for software tools used to design chips — got drawn into the Washington-Beijing trade war this year. The US imposed a licensing requirement that would've limited exports of their products, part of its response to Beijing's limits on rare earths, before abruptly lifting that mandate weeks later.

Following Beijing's decision, Synopsys has cleared one of the last major hurdles to a deal intended to shore up its market position. The buyout, announced in early 2024, was already approved by European and US authorities. In June, reports emerged that Chinese officials were delaying it in part because of escalating tensions over Washington's chip sanctions.

Shares of California-based Synopsys rose as much as 3.5% after markets opened in New York on Monday. Pennsylvania-based Ansys jumped as much as 5.6%.

US companies seeking Chinese antitrust approval — particularly for deals in the tech sector — are often caught in the middle of geopolitical disputes between the countries. Although neither Synopsys nor Ansys is based in China, the two companies needed Beijing's sign-off because China is one of the world's largest semiconductor markets.

In 2018, US-based Qualcomm Inc. scrapped a $44 billion bid for Dutch chipmaker NXP Semiconductors NV after failing to secure a nod in time. As recently as 2023, Intel Corp. abandoned its proposed $5.4 billion acquisition of Tower Semiconductor Ltd. for the same reason. Broadcom Inc.'s $61 billion merger with software maker VMware Inc. eventually went through, although investors remained on edge throughout the process due to speculation that China would hold up the deal.

©2025 Bloomberg L.P.