
The Chinese have kept the yuan weak to deal with U.S. tariffs, says Michelle Caruso-Cabrera

Related Articles
Yahoo · an hour ago
Google tweaks Google Play conditions following EU pressure
PARIS (Reuters) - Alphabet unit Google said on Tuesday it will make it easier for app developers to steer customers to channels other than Google, after the European Union competition watchdog charged it with breaching EU regulations.

In March, Google was hit with two charges of breaching the EU's landmark Digital Markets Act (DMA), which aims to rein in the power of Big Tech. Google has been in the European Commission's crosshairs since early 2024 over whether it restricts app developers from informing users about offers outside its app store, Google Play, and whether it favours its vertical search services such as Google Flights.

Regulators have said Alphabet technically prevents app developers from freely steering consumers to other channels for better offers. They said a service fee charged by the company for facilitating the initial acquisition of a new customer by an app developer via Google Play goes beyond what is justified.

In a blog post, Google said that following discussions with the European Commission, developers and other experts, it is updating certain terms. "While we still have concerns that these changes could expose Android users to harmful content and make the app experience worse, we're updating our External Offers Program for the EU with revised fees and more options for Android developers, following DMA discussions with the European Commission," EMEA Senior Competition Counsel Clare Kelly said.

The company, which has been fined more than 8 billion euros ($9.3 billion) by the EU for various antitrust violations, risks fines of up to 10% of its global annual sales if found guilty of breaching the DMA.

($1 = 0.8580 euros)

Wall Street Journal · an hour ago
Google Offers to Tweak Play Store Terms to Stave Off EU Fine
Alphabet's Google has offered to change terms and conditions for developers using its Play Store in a bid to stave off a potential fine under European Union tech rules. Google said in a blog post on Tuesday that it would alter fees it charges developers for app downloads and make it easier to lead users to deals and offers outside of its own app store.
Yahoo · 2 hours ago
OPINION: Start secure, scale smarter: cybersecurity essentials for Canada's AI startups
Canada's AI ecosystem is booming. With more than 670 startups and over 30 generative AI companies, Canada ranks fourth globally in generative AI firms per capita, according to Deloitte. The market was on track to reach US$4.13 billion in 2024, fueled by fast-moving innovation hubs in Toronto, Montreal, Vancouver and Calgary. However, behind the momentum lies a growing blind spot: cybersecurity. For many AI founders, the race to ship products and secure funding can overshadow security concerns. This tradeoff may seem efficient now, but it creates long-term risks that can derail even the most promising ventures.

Canadian startups can't afford to ignore cybersecurity

Most founders are familiar with the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how Canadian organizations handle personal data. But fewer are actively aligning with emerging global standards such as the EU AI Act, which is already shaping how AI systems must be designed, secured and monitored worldwide. Even if your company does not operate in the EU today, your users or enterprise clients might. If you plan to scale internationally, security and ethics will become non-negotiable.

The hidden risks of speed

Modern AI products are often built on large language models (LLMs) such as ChatGPT, Claude or open-source alternatives, and developers increasingly rely on tools like GitHub Copilot to generate code from natural language prompts. This trend is known as vibe coding: using generative AI tools to accelerate software development, often without a deep understanding of the generated code. It enables rapid prototyping and gives non-technical team members access to development processes. However, the speed it enables comes with significant risks: insecure default settings, missing input validation, poorly understood codebases, susceptibility to prompt injection and data leaks, and training data that may contain vulnerabilities or bias. These issues accumulate quickly. A single incident, such as a data breach, flawed output or regulatory violation, can stall growth and erode user trust.

Security best practices every Canadian AI startup should adopt

Security should not be treated as an afterthought; it must be a fundamental part of product development. From day one, apply a security-first mindset when designing your AI systems. Conduct threat modeling to understand how your data flows, who interacts with it and where potential vulnerabilities exist. This includes identifying high-risk areas such as unsecured APIs, risky third-party dependencies and overly permissive access controls. Go beyond traditional application security by accounting for risks unique to AI systems, including adversarial attacks, model extraction and unpredictable behavior in fine-tuned models. Building with security in mind from the start can prevent major issues later in your product's lifecycle.

Your model is only as trustworthy as the data it is trained on. Poor data hygiene can embed toxic language, biased assumptions or sensitive information directly into your AI outputs. Avoid scraping random data from the web or relying on unverified synthetic content. Instead, establish a process to validate, sanitize and document your datasets, just as you would with source code. This ensures reproducibility, transparency and compliance with legal and ethical standards. Use scanning tools to detect anomalies, inappropriate language or bias indicators, and store dataset documentation in a way that makes it easy to audit when needed. High-quality data is foundational to safe and effective AI. The first sketch below shows what such a scan might look like.

Prompt injection is one of the most critical threats facing LLM-based applications today. These attacks manipulate inputs to trick models into behaving in unintended ways, such as revealing internal logic, executing hidden instructions or leaking sensitive information. To prevent this, sanitize and validate user inputs rigorously. Avoid exposing system-level prompts to end users, and implement safeguards for any feature that allows file uploads, command execution or plugin integrations. Conduct regular AI red teaming exercises to test how well your application defends against malicious or unpredictable input. Controlling what your model sees and how it responds is essential to maintaining the integrity of your product. The second sketch below illustrates basic input hardening.
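Here is a minimal Python sketch of what an automated hygiene scan over a text dataset could look like. Everything in it (the PII_PATTERNS and BLOCKLIST names, the audit_dataset helper, the checks themselves) is an illustrative assumption rather than a reference to any specific tool; a real pipeline would use much richer lexicons and detectors.

    # Minimal dataset hygiene scan, standard library only.
    # All names and patterns are illustrative assumptions.
    import hashlib
    import json
    import re
    from datetime import datetime, timezone

    # Hypothetical patterns for obvious PII that should not reach training data.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    }
    BLOCKLIST = {"password", "ssn"}  # placeholder for a real toxicity/PII lexicon

    def audit_record(text: str) -> list[str]:
        """Return a list of hygiene findings for one training record."""
        findings = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
        findings += [f"blocked:{w}" for w in BLOCKLIST if w in text.lower()]
        return findings

    def audit_dataset(records: list[str]) -> dict:
        """Scan every record and produce an auditable summary document."""
        flagged = {i: f for i, r in enumerate(records) if (f := audit_record(r))}
        return {
            "run_at": datetime.now(timezone.utc).isoformat(),
            # Hash the corpus so the audit is tied to an exact dataset version.
            "dataset_sha256": hashlib.sha256("\n".join(records).encode()).hexdigest(),
            "total": len(records),
            "flagged": flagged,
        }

    if __name__ == "__main__":
        sample = ["The weather is nice today.", "Contact me at jane@example.com"]
        print(json.dumps(audit_dataset(sample), indent=2))

Tying the report to a hash of the exact records means a later audit can confirm precisely which dataset version was scanned, which is the documentation habit the article recommends.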
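And here is the second sketch: basic input hardening for an LLM endpoint. The marker phrases and the harden and build_messages helpers are hypothetical, and heuristics like these are only a first layer, to be combined with model-side guardrails and the red teaming described above.

    # Minimal input-hardening sketch for untrusted LLM input.
    # Heuristics and names are illustrative assumptions, not a complete defense.
    import re
    import unicodedata

    MAX_INPUT_CHARS = 4000  # assumed budget; tune per application

    # Phrases that commonly signal an attempt to override instructions.
    INJECTION_MARKERS = (
        "ignore previous instructions",
        "ignore all previous",
        "reveal your system prompt",
        "you are now",
    )

    def harden(user_input: str) -> str:
        """Normalize and screen one untrusted user message before it reaches the model."""
        # Normalize Unicode so look-alike characters can't hide marker phrases.
        text = unicodedata.normalize("NFKC", user_input)
        # Strip control characters that can smuggle hidden instructions.
        text = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
        # Bound the input so a single request can't exhaust the context window.
        text = text[:MAX_INPUT_CHARS]
        lowered = re.sub(r"\s+", " ", text.lower())
        if any(marker in lowered for marker in INJECTION_MARKERS):
            raise ValueError("input rejected: possible prompt injection")
        return text

    # Keep the system prompt server-side and pass user text as data,
    # never by splicing it into the instruction string itself.
    def build_messages(user_input: str) -> list[dict]:
        return [
            {"role": "system", "content": "You answer questions about our product."},
            {"role": "user", "content": harden(user_input)},
        ]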
AI-specific regulations are still evolving in Canada, but global frameworks already provide valuable guidance. The NIST AI Risk Management Framework helps teams assess and mitigate risks associated with deploying AI systems. ISO/IEC 42001 offers a governance model for responsible AI development, while the OWASP Top 10 for LLMs identifies the most common vulnerabilities found in AI-powered applications. Adopting these standards early allows your startup to build systems that meet enterprise-grade expectations, improve credibility with investors and prepare for future compliance obligations.

The more opaque your AI model is, the harder it becomes to protect, debug or improve. Explainability is essential for building trust, ensuring accountability and meeting regulatory requirements. Use tools like LIME or SHAP to help unpack how your models make decisions, and maintain detailed logs of model inputs, outputs, user interactions and changes over time. Strong traceability supports both incident response and compliance with privacy laws such as PIPEDA and GDPR. It also provides a foundation for transparency when communicating with customers or responding to audits. When users understand your model, they are more likely to trust and adopt your product. A final sketch at the end of this piece pairs SHAP attributions with audit logging.

Closing thoughts

Security should not be treated as an afterthought. The earlier you integrate cybersecurity into your startup's culture and development lifecycle, the more resilient, trusted and scalable your product becomes. By embracing secure-by-design principles now, your team can avoid costly rework, reduce risk exposure and build the kind of AI that earns market respect and stands the test of time. Canadian AI founders have a unique opportunity to lead innovation and set the bar for ethical, secure AI that puts users and trust at the center of technology.

Kelly Onu is a cybersecurity consultant at EY with eight years of experience and a passion for building secure systems across various industries. She is an active community advocate, sharing thought leadership and mentoring emerging professionals through outreach and inclusion initiatives.
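As a closing illustration of the explainability and traceability advice above, here is a minimal sketch that asks SHAP to attribute a toy model's prediction and logs input, output and attributions together. It assumes the scikit-learn and shap packages are installed; the model, log format and predict_with_trace helper are illustrative assumptions, not a production design.

    # Minimal sketch pairing explainability (SHAP) with traceable logging.
    # Assumes scikit-learn and shap are installed; all specifics are toy choices.
    import json
    import logging
    from datetime import datetime, timezone

    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    logging.basicConfig(filename="predictions.log", level=logging.INFO, format="%(message)s")

    # Toy model standing in for a production classifier.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)

    def predict_with_trace(features: np.ndarray) -> int:
        """Predict, attribute the decision per feature, and log both for audit."""
        pred = int(model.predict(features.reshape(1, -1))[0])
        # Per-feature contributions to this prediction (per class for classifiers).
        contrib = explainer.shap_values(features.reshape(1, -1))
        logging.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "input": features.tolist(),
            "prediction": pred,
            "shap_values": np.asarray(contrib).tolist(),
        }))
        return pred

    print(predict_with_trace(X[0]))

A log line like this gives a reviewer the input, the decision and the per-feature reasoning in one place, which is the kind of traceability that PIPEDA- or GDPR-driven audits tend to ask for.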