Latest news with #GDPR


TECHx
2 hours ago
- Business
- TECHx
Can Ethical AI Be More Than a Talking Point?
Ethical AI is moving from talk to action as global laws, pledges, and accountability measures reshape how technology is built and deployed.

AI is everywhere in 2025. It writes, designs, predicts, diagnoses, recommends and, increasingly, governs. From smart cities to courtrooms, its decisions are shaping our lives. But as AI grows more powerful, one question gets louder: are we building it responsibly, or are we just saying the right things?

This month, the European Union made headlines with the passage of the AI Act, the first major attempt to regulate AI at scale. This sweeping law bans certain uses of AI, such as real-time facial recognition in public spaces and social scoring systems. It also imposes strict rules on high-risk applications like biometric surveillance, recruitment tools, and credit scoring.

Why does this matter? Because it signals that AI governance is moving from voluntary ethics to enforceable law. The EU has set a precedent others may follow, much like it did with GDPR for data privacy. But here's the catch: regulation is only as effective as its enforcement. Without clear oversight and penalties, even the best laws can fall short. Europe's AI Act is a strong start, but the world is watching how it will be applied.

Across the Atlantic, the United States is facing growing pressure to catch up. In May 2025, Congress held a new round of hearings with major AI players like OpenAI, Meta, Google DeepMind, and Anthropic. Lawmakers are calling for clear standards and transparency. Several of these companies have signed voluntary AI safety pledges, promising to develop systems responsibly.

Meanwhile, South Korea is exploring a different path. Officials are developing an AI Ethics Certification, a system that would allow companies to prove that their models are fair, transparent, and safe. This is a smart move. Turning ethics into something measurable and certifiable could help bridge the gap between values and verification. However, the success of this initiative depends on how independent, transparent, and rigorous the certification process is.

Principles Are Easy. Proof Is Hard.

It's worth noting that almost every major AI company today has published a set of ethical principles. Words like trust, safety, accountability, and fairness appear prominently in blog posts and mission statements. But dig deeper and you'll find the real challenge: How are these principles enforced internally? Are external audits allowed? Are impact assessments made public? Is there a clear process to test and mitigate bias?

When AI Ethics Fails

We've already seen what happens when AI is built without enough attention to fairness or inclusivity. In 2023, a widely used hospital AI system in the U.S. was found to recommend fewer treatment options to Black patients. The cause? Biased training data that didn't account for structural inequalities in healthcare.

In 2024, generative AI tools sparked criticism for gender and racial bias. When users searched for terms like 'CEO' or 'doctor,' the images generated were overwhelmingly of white men, despite the global diversity of those professions.

These are not one-off glitches. They are symptoms of a deeper issue: AI systems trained on biased data will replicate, and even amplify, that bias at scale. That's why ethics can't be a box to check after a product launches. It must be embedded from the start.
A New Ethical Frontier: The UAE Leads in the Middle East

Encouragingly, ethical AI leadership is emerging from regions not traditionally known for tech regulation. The United Arab Emirates is one of them. The UAE's National AI Strategy 2031 places a strong emphasis on fairness, transparency, and inclusivity. This isn't just talk. Institutions like the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) are actively training a new generation of AI researchers with governance and ethics embedded in their education.

This is a critical development. It shows that countries outside the usual power centers, like the U.S. and EU, can shape global norms. The UAE isn't just importing AI innovation; it's helping design how AI should be governed.

Platforms for Global Dialogue

Major events like AI Everything and GITEX GLOBAL, hosted in Dubai, are also evolving. They're no longer just product showcases. They now bring together global experts, policymakers, and ethicists to discuss responsible AI practices, risks, and solutions. These events are important, not only because they give emerging markets a voice in the AI ethics debate, but because they encourage cross-border collaboration. And that's exactly what AI governance needs.

Why? Because AI systems don't stop at national borders. Facial recognition, large language models, predictive analytics: they all operate across regions. If we don't align on ethics globally, we risk creating fragmented systems with uneven protections.

What Needs to Happen Now

It's clear that we're moving in the right direction, but not fast enough. What's missing is the bridge between principles and practice. We need:

- Not just values, but verification.
- Not just pledges, but clear policies.
- Not just intentions, but independent audits.

Ethics should be baked into the AI lifecycle, from design to deployment. That means testing for bias before the model goes live, ensuring transparency in how decisions are made, and creating clear channels for redress when systems fail. AI governance shouldn't slow innovation. It should guide it.

The pace of AI innovation is staggering. Every week brings new tools, new capabilities, and new risks. But alongside that speed is an opportunity: to define the kind of AI future we want. In 2025, ethical AI should not be a trending topic or a marketing slogan. It must be the foundation, the baseline. Because when technology makes decisions about people, those decisions must reflect human values, not just machine logic.

By Rabab Zehra, Executive Editor at TECHx.


Daily Record
17 hours ago
- Daily Record
Bar manager wins £10k after she was sacked for checking CCTV over claims of 'spiking'
Sophie Marsh was sacked on the spot for checking CCTV footage after receiving a report of a suspected spiking incident.

A bar manager has won nearly £10,000 in compensation after she was unfairly sacked for checking CCTV footage for evidence of a "spiking" incident. Sophie Marsh, from Mauchline in East Ayrshire, worked at Saltcoats Labour Club for two years before she was suddenly dismissed on October 11 last year.

Bosses sacked the 40-year-old on the spot after claiming she had breached GDPR when she checked the footage at the request of a female customer who suspected her drink had been tampered with during a karaoke night. Sophie was dismissed without any investigation or disciplinary action, and the case was put before an employment tribunal in Glasgow last month. It ruled that Sophie was not afforded the right to an investigation or an opportunity to put her side across before a dismissal decision was made. She was awarded a total of £9,500 for the unfair and wrongful sacking.

Speaking about the outcome, Sophie told the Record: "I wasn't checking the CCTV footage for my own leisure, I was checking it to see if criminal activity had taken place within the bar.

"A customer suspected she had been spiked and as bar manager, I had a duty of care to her as my customer.

"In this case, there was no criminality but what if there had been and I was sacked for simply trying to investigate this?"

During the tribunal, bosses were asked to clarify the data breach they claimed had occurred; however, neither could provide a clear explanation. Instead, one of them argued that Sophie had viewed the CCTV "like it were a screening of a night out at the club".

The tribunal further heard that in the months before Sophie was dismissed, several members of the club's committee had been hostile to her after she made a grievance about a committee member. However, Judge E Mannion ruled that although she had been subjected to a level of enmity after making the report, it was not the sole reason why she was sacked.

Handing down her findings, Judge Mannion ruled that viewing the CCTV would, at most, be "a breach of an internal policy". She went on to describe the case as a "wholescale" breach of the Acas Code of Practice, a set of fairness standards that employers are required to follow. She said: "While it is appreciated that the respondent is a small organisation, I cannot think of a more egregious breach of the Acas Code of Practice than this case.

"It was a total and wholescale breach. Every aspect of it was departed from."

Sophie said: "Losing my job right before Christmas plunged my life into a period of stress and uncertainty.

"I already suffer from anxiety so this had a profound impact on my mental health.

"Taking this to tribunal wasn't about money but rather to clear my name.

"Saltcoats is a small town so it is crucial people know the truth.

"Even after the truth coming to light and them being proved to have wrongfully dismissed me, I still haven't received even an apology from any of the people that have put me through this."


Irish Times
a day ago
- Business
- Irish Times
TikTok seeks stay on suspension of data transfer to China decision
TikTok is to ask the High Court to halt a suspension of data transfers to China that is due to take effect within six months under a decision made in early May by the Data Protection Commission (DPC).

On May 2nd, the DPC announced it had made a final decision in its inquiry into the lawfulness of transfers by TikTok Technology Ltd of personal data of users of the TikTok platform to the People's Republic of China from countries in the European Economic Area (EEA), which includes all the EU along with Iceland, Liechtenstein, and Norway.

DPC commissioners Dr Des Hogan and Dale Sunderland found that TikTok infringed the GDPR regarding its transfers and regarding its transparency requirements. The DPC imposed fines totalling €530 million and ordered TikTok to bring its processing into compliance within six months, including suspending the transfers to China if this was not done within that time frame.

On Thursday, Emily Egan McGrath SC, for TikTok, told Mr Justice Mark Sanfey her client was seeking to have the case admitted to the fast-track Commercial Court as it was an urgent matter. She said the damage the decision would cause to her client "was very significant" and TikTok was looking for an order putting a stay on the suspension of the data transfer decision.

Kelley Smith SC, for the DPC, said there was a significant volume of papers in the case and her side had not had a chance to look at the documents. However, she did not imagine there would be any objection to the application to enter the case into the commercial list.

Mr Justice Sanfey said he thought there might not be opposition to the admission to the commercial list, but it may be that the DPC will take a different tack. He said there were difficulties in fixing a hearing on the stay application, with judges tied up in other cases in the coming weeks, but he would hear the application to admit the case to the Commercial Court next week.


Forbes
a day ago
- Business
- Forbes
Securing The Future: How Big Data Can Solve The Data Privacy Paradox
Shinoy Vengaramkode Bhaskaran, Senior Big Data Engineering Manager, Zoom Communications Inc.

As businesses continue to harness Big Data to drive innovation, customer engagement and operational efficiency, they increasingly find themselves walking a tightrope between data utility and user privacy. With regulations such as GDPR, CCPA and HIPAA tightening the screws on compliance, protecting sensitive data has never been more crucial. Yet Big Data, often perceived as a security risk, may actually be the most powerful tool we have to solve the data privacy paradox.

Modern enterprises are drowning in data. From IoT sensors and smart devices to social media streams and transactional logs, the information influx is relentless. The '3 Vs' of Big Data (volume, velocity and variety) underscore its complexity, but another 'V' is increasingly crucial: vulnerability. The cost of cyber breaches, data leaks and unauthorized access events is rising in tandem with the growth of data pipelines. High-profile failures, as we've seen at Equifax, have shown that privacy isn't just a compliance issue; it's a boardroom-level risk.

Teams can wield the same technologies used to gather and process petabytes of consumer behavior data to protect that information. Big Data engineering, when approached strategically, becomes a core enabler of robust data privacy and security. Here's how:

Big Data architectures allow for precise access management at scale. By implementing role-based access control (RBAC) at the data layer, enterprises can ensure that only authorized personnel access sensitive information. Technologies such as Apache Ranger or AWS IAM integrate seamlessly with Hadoop, Spark and cloud-native platforms to enforce fine-grained access control. This is not just a technical best practice; it's a regulatory mandate. GDPR's data minimization principle demands access restrictions that Big Data can operationalize effectively.

Distributed data systems, by design, traverse multiple nodes and platforms. Without encryption in transit and at rest, they become ripe targets. Big Data platforms like Hadoop and Apache Kafka now support built-in encryption mechanisms. Moreover, data tokenization or de-identification allows sensitive information (like PII or health records) to be replaced with non-sensitive surrogates, reducing risk without compromising analytics. As outlined in my book, Hands-On Big Data Engineering, combining encryption with identity-aware proxies is critical for protecting data integrity in real-time ingestion and stream processing pipelines.

You can't protect what you can't track. Metadata management tools integrated into Big Data ecosystems provide data lineage tracing, enabling organizations to know precisely where data originates, how it's transformed and who has accessed it. This visibility not only helps in audits but also strengthens anomaly detection. With AI-infused lineage tracking, teams can identify deviations in data flow indicative of malicious activity or unintentional exposure.

Machine learning and real-time data processing frameworks like Apache Flink or Spark Streaming are useful not only for business intelligence but also for security analytics. These tools can detect unusual access patterns, fraud attempts or insider threats with millisecond latency. For instance, a global bank implementing real-time fraud detection used Big Data to correlate millions of transaction streams, identifying anomalies faster than traditional rule-based systems could react.

Compliance frameworks are ever-evolving. Big Data platforms now include built-in auditability, enabling automatic checks against regulatory policies. Continuous integration and continuous delivery (CI/CD) for data pipelines allows for integrated validation layers that ensure data usage complies with privacy laws from ingestion to archival. Apache Airflow, for example, can orchestrate data workflows while embedding compliance checks as part of the DAGs (directed acyclic graphs) used in pipeline scheduling.

Moving data to centralized systems can increase exposure in sectors like healthcare and finance. Edge analytics, supported by Big Data frameworks, enables processing at the source. Companies can train AI models on-device with federated learning, keeping sensitive data decentralized and secure. This architecture minimizes data movement, lowers breach risk and aligns with the privacy-by-design principles found in most global data regulations. Simplified sketches of several of these techniques follow.
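To ground these ideas, here is a minimal, illustrative Python sketch of RBAC-style column filtering at the data layer. The roles, policy table, and record fields are hypothetical; in production, this enforcement would be delegated to a policy engine such as Apache Ranger rather than hand-rolled in application code.

```python
# Minimal sketch of role-based access control (RBAC) at the data layer.
# Roles, columns, and the sample record are hypothetical, for illustration only.

from dataclasses import dataclass

# Columns each role may read; anything not listed is denied by default.
COLUMN_POLICY = {
    "analyst": {"user_id", "country", "purchase_total"},
    "support": {"user_id", "email"},
    "auditor": {"user_id", "email", "country", "purchase_total"},
}

@dataclass
class User:
    name: str
    role: str

def read_record(user: User, record: dict) -> dict:
    """Return only the columns the user's role is permitted to see."""
    allowed = COLUMN_POLICY.get(user.role, set())
    return {col: val for col, val in record.items() if col in allowed}

record = {"user_id": 42, "email": "jane@example.com",
          "country": "IE", "purchase_total": 129.50}

print(read_record(User("amy", "analyst"), record))  # no email exposed
print(read_record(User("sam", "support"), record))  # no purchase data exposed
```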
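Tokenization can be sketched just as briefly. The snippet below swaps PII for stable surrogates using a keyed hash (HMAC), so joins and aggregations still work without exposing raw values. The secret key and field names are assumptions; a real deployment would keep the key in a KMS or vault.

```python
# Minimal sketch of de-identifying PII before it enters an analytics pipeline.

import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # assumption: in practice, fetched from a KMS/vault

def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible surrogate."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

event = {"email": "jane@example.com", "ssn": "123-45-6789", "amount": 129.50}
PII_FIELDS = {"email", "ssn"}

# Non-sensitive fields pass through untouched; PII becomes deterministic tokens.
safe_event = {k: tokenize(v) if k in PII_FIELDS else v for k, v in event.items()}
print(safe_event)
```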
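The real-time analytics point can also be reduced to a toy example. The sketch below maintains running statistics with Welford's online algorithm and flags transactions far outside the observed range; the 4-sigma threshold and the event stream are illustrative, and a production Flink or Spark Streaming job would keep such state per account.

```python
# Minimal sketch of streaming anomaly detection over a transaction stream.

import math

class RunningStats:
    """Welford's online mean/variance, updated one observation at a time."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def stddev(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

stats = RunningStats()
stream = [102.0, 98.5, 101.2, 99.9, 100.4, 2500.0]  # last event is suspicious

for amount in stream:
    if stats.n > 3 and stats.stddev() > 0:
        z = abs(amount - stats.mean) / stats.stddev()
        if z > 4:  # assumption: alert threshold would be tuned per workload
            print(f"ALERT: {amount} is {z:.0f} standard deviations from the mean")
    stats.update(amount)
```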
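For the CI/CD point, the sketch below shows one way a compliance gate might be embedded in an Airflow DAG (syntax assumes Airflow 2.4 or later). The retention limit and the check itself are hypothetical placeholders, not a real governance integration.

```python
# Minimal sketch of a compliance check gating an ingestion task in Airflow.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

MAX_RETENTION_DAYS = 30  # assumption: policy limit for this dataset

def check_retention_policy():
    """Fail the run, blocking downstream tasks, if records are held too long."""
    oldest_record_age_days = 12  # placeholder: query your metadata store here
    if oldest_record_age_days > MAX_RETENTION_DAYS:
        raise ValueError("Retention policy violated: purge before processing")

def ingest_data():
    pass  # placeholder for the actual ingestion step

with DAG(
    dag_id="gdpr_compliant_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    gate = PythonOperator(task_id="check_retention_policy",
                          python_callable=check_retention_policy)
    ingest = PythonOperator(task_id="ingest_data", python_callable=ingest_data)

    gate >> ingest  # ingestion runs only if the policy check passes
```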
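Finally, the federated-learning idea boils down to this: each site improves a shared model on data that never leaves it, and only the updated parameters are pooled. The toy single-coefficient "model" and all numbers below are illustrative; real systems would use a framework such as TensorFlow Federated or Flower.

```python
# Minimal sketch of federated averaging: raw records stay on each site,
# only model weights travel.

def local_update(w, local_data, lr=0.01):
    """One pass of gradient descent on data that never leaves the site."""
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x  # gradient of squared error for y ~ w*x
    return w

# Three sites (hospitals, banks, devices...), each holding private (x, y)
# pairs that roughly follow y = 2x.
sites = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (2.5, 5.2)],
    [(0.5, 0.9), (3.0, 6.1)],
]

global_w = 0.0
for _ in range(20):
    # Each site refines the shared weight locally; only the updated weights
    # (never the underlying data) are sent back and averaged.
    updates = [local_update(global_w, data) for data in sites]
    global_w = sum(updates) / len(updates)

print(f"learned coefficient: {global_w:.2f}")  # converges toward ~2.0
```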
While Big Data engineering offers formidable tools to fortify security, we cannot ignore the ethical dimension. Bias in AI algorithms, lack of transparency in automated decisions and opaque data brokerage practices all risk undermining trust.

Thankfully, Big Data doesn't have to be a liability to privacy and security. In fact, with the right architectural frameworks, governance models and cultural mindset, it can become your organization's strongest defense. Are you using Big Data to shield your future, or expose it? As we continue to innovate in an age of AI-powered insights and decentralized systems, let's not forget that data privacy is more than just protection; it's a promise to the people we serve.