
TikTok did not breach employment law by refusing to allow worker injured in car crash to work remotely
The refusal of TikTok to allow an employee who had suffered serious injuries in a car crash to work full time from home was not in breach of employment legislation, the Workplace Relations Commission has ruled.
The WRC found TikTok Technology had not breached the Work Life Balance and Miscellaneous Provisions Act 2023 in its decision not to allow Zaurbek Musaev to work from home.
Mr Musaev, an operations specialist with TikTok on a monthly salary of €3,556, claimed the company had not treated his request for remote working made in March 2024 in a lawful manner.
He told the WRC he had applied for full-time remote working largely due to a very serious road traffic collision in late 2020.
Mr Musaev lives in Co Monaghan and commutes to Dublin either by car or public transport.
He gave evidence he found travelling very stressful and it triggered very traumatic recollections of the crash in which he was involved.
He claimed the quality of his work would not be impacted by remote working and would in fact improve and be more beneficial to TikTok.
A legal representative of the company told the WRC its Dublin office was the complainant's place of work.
The solicitor said flexibility regarding remote working could be allowed but subject to the overall acceptance that the Dublin office was the place of work identified in Mr Musaev's employment contract.
TikTok claimed it had abided by both the letter and spirit of the Work Life Balance and Miscellaneous Provisions Act 2023.
It acknowledged Mr Musaev's circumstances were complex, but said they had been fully examined in keeping with its extensive internal procedures.
The company said it had discharged all its obligations under the legislation and had no case to answer.
WRC adjudication officer Michael McEntee ruled both oral and written evidence had shown considerable engagement between the parties, which met the requirements of the legislation.
Mr McEntee added: 'The extensive case law was also demonstrative that once proper consideration is given to a request, the statutory obligations of a respondent are met.'
He also noted considerable efforts were being made to resolve the many underlying complexities of the case amicably.
Related Articles


Irish Times
2 hours ago
Scientist with ‘debilitating' condition alleges discrimination by not being allowed work from home
A scientist living with 'debilitating' endometriosis has accused international medical devices firm Abbott of discriminating against her by refusing to let her work from home to ease a daily commute of nearly four hours.

The worker, who has over a decade of industry experience and advanced postgraduate qualifications, told the Workplace Relations Commission (WRC) on Tuesday she was given ten minutes to pick up her things and get off an Abbott site last autumn after being told she failed her probation. She said she was reduced to 'crying all the way back' in a colleague's arms during the drive home.

Abbott Ireland Ltd is denying complaints of disability discrimination and discriminatory dismissal under the Employment Equality Act 1998 by the worker, Ms X, who has been afforded anonymity by the WRC.

Ms X was hired by Abbott in spring 2024 and spent just short of six months working in an office at an Abbott site in a county town analysing test data, but was deemed to have failed her probation and was let go that autumn, the tribunal heard.

The company's representative, Fiona Egan of the Irish Business and Employers' Confederation (Ibec), submitted that Ms X failed her probation for 'conduct and performance' following a number of instances of lateness and uncertified absences from work. It had 'nothing to do with her condition', she said.

Shaun Boylan BL, appearing for Ms X instructed by Sean Ormonde & Co Solicitors, said the firm's policy of refusing to allow probationary employees to work from home was 'implicitly discriminatory' against his client, as it put accommodations for her disability 'on hold for six months'.

Ms X said she had been diagnosed with stage two endometriosis in 2022, a condition affecting the female reproductive organs.
She said it causes her 'debilitating pain', accompanied by 'nausea, fainting and dizziness', which was at its worst during the 7-8 days of her period and required prescription-only codeine and opiate painkillers to manage.

The daily drive from Ms X's home to the Abbott site was 'coming up on a four-hour commute to work every day', adjudication officer Brian Dolan remarked during the hearing on Tuesday. Ms X said she thought at first she should 'just endure' the commute. Her evidence was that she was 'promised in the interview that it would be a hybrid role' and that she had turned down roles elsewhere with a five-day-a-week on-site commitment because of that.

Ms X said her immediate team consisted of six or seven colleagues, but 'nobody' was in the office five days a week, and 'most' were working from home.

On a date six weeks after she started, a 'very sudden' department-wide meeting was called, at which she said a senior manager declared 'there would be no more work from home possible' at the site, barring the 'possibility' of one day a week. Ms X said she was in 'excruciating pain' that day and found the message 'difficult to take'.

She told the commission she went straight to her team leader 'in visible distress, with tears in my eyes' and told him she had endometriosis. She said her team leader was 'empathetic' and 'supportive' at that stage and gave her an assurance that a 'one week in, one week out' work from home arrangement would be possible, but only once her probation was finished.

Making it in for an on-site team meeting at 9.30am meant setting out from home at 6.30am or 6.45am, Ms X said. In the seven weeks before a probationary caution letter was issued in her final weeks on the job, Ms X was late eight times and absent without a medical cert on three occasions, leading to an occupational health referral.
She said a senior manager later told her: 'The company cannot offer more than one day work from home past your probation,' and urged her to find somewhere to live closer to the site.

The probation review continued into the following month, and concluded when her team leader called her into a conference room and read out a letter stating that her employment was being terminated, she said. The only reason stated was: 'Your standard of performance has not met company expectations.'

'I asked why. [My team leader] said: 'Everything is in the letter,'' Ms X said.

After she said she could not continue with the meeting, a company HR officer told her an outstanding pay matter could be handled by email and gave her 'ten minutes to leave, to grab [my] stuff and leave the premises'.

'Everyone was crying in the car,' she said. 'I hugged my colleague, in the car, and kept crying all the way back to Dublin,' she added.

The adjudicator, Mr Dolan, told Ms X he had the power to order her reinstatement as a remedy under the equality legislation if she was successful in the case. Ms X replied: 'I'd probably never come back to Abbott.'

The case has been adjourned to a later date, when three company witnesses, including Ms X's team leader and the senior manager, are due to give evidence.


Irish Independent
2 hours ago
Staff at Bord na Móna bin collection service to go on strike this month over planned sale
A Siptu spokesperson said its members have rejected a proposal brokered at the Workplace Relations Commission to resolve the dispute.

The industrial action will begin with a one-day strike on Thursday June 19 and will ramp up to two days of stoppages the following week, and three days the week after that.

Bord na Móna Recycling provides wheelie bin and waste collection services to up to 135,000 customers in Leinster and Munster. It has customers in Carlow, Kildare, Laois, Limerick, Meath, Offaly, Roscommon, Tipperary, Westmeath, Wexford and Wicklow. Bord na Móna has announced plans to sell the company to Kerry-based KWD Recycling.

Siptu officials said it is the country's last publicly owned domestic waste-collection service and warned prices could rise if the number of operators shrinks.

Siptu divisional organiser Adrian Kane said the workers' strike committee met last night and unanimously agreed to reinstate a notice of strike and industrial action. He said notice was served on the company today 'and our members are determined to see this through'.

'Our members don't want to be in this position but feel they have no choice,' he said.

He said the primary reason they rejected the proposal was due to an alleged failure of the company to agree a Registered Employment Agreement. Mr Kane said this would guarantee the workers' terms and conditions were protected in the event of a sale.

'The fire sale of Bord na Móna Recycling is bad for workers, customers, and the environment,' he said. 'It makes no economic or environmental sense. Why privatise the last remaining publicly owned domestic waste provider?'


RTÉ News
3 hours ago
All you need to know about voice spoofing and audio deepfakes
Analysis: Biometric fraud such as voice spoofing and audio deepfakes is often part of broader social engineering attacks by scammers and criminals

Voice spoofing involves using digital technology such as artificial intelligence to mimic someone's voice so accurately that it can deceive both humans and automated speaker verification systems. With recent rapid advancements in AI, creating these fake voices, often called "audio deepfakes", has become alarmingly easy. Today, with just a few seconds of recorded speech from platforms like podcasts, YouTube or TikTok, machine learning models can generate highly realistic synthetic voices that mimic real individuals. It is a type of biometric fraud and often part of broader social engineering attacks.

How does voice spoofing work?

AI-powered tools analyse the unique patterns of a person's speech, such as tone, pitch and rhythm, and use this data to produce synthetic speech that closely resembles the original voice. The technology has become so advanced that distinguishing between a real voice and a fake one is increasingly challenging.

From RTÉ Radio 1's The Business, BBC File on 4 reporter Paul Connolly on how criminals are now using AI-generated voices to scam people out of their money

The process typically begins with an attacker collecting voice clips from online sources like social media or videos. Specialised AI models, such as VGGish or YAMNet, analyse these voice samples to extract important acoustic patterns and turn them into digital fingerprints called embeddings. These embeddings are then fed into voice generation systems such as Tacotron, WaveNet or FastSpeech, which produce new speech mimicking the original voice. The resulting fake voice can be used in phone calls or apps to impersonate someone in real time.

How is this going to impact us in the real world?
Financial scams are a growing problem. We have all had a (fairly ridiculous) phone call in which a robot voice purporting to be from a company tries to get information or money, but more sophisticated versions have worked. In the UK, fraudsters used AI-generated voices to impersonate financial advisors, leading to a multi-million euro scam targeting crypto investors. In the US, the FBI has warned about scammers using AI to mimic senior US officials' voices, deceiving individuals into sharing confidential information. There have even been cases where scammers cloned the voices of loved ones, calling individuals and pretending to be in distress to extract money. These incidents highlight the disturbing reality that even the sound of someone's voice can no longer be trusted.

From CNN, can Donie O'Sullivan's parents tell the difference between RealDonie's voice and AI-Donie's voice?

Celebrities, politicians and influencers are particularly at risk because their voices are widely available online. The more audio content (voice data) that is publicly available, the easier it is for AI tools to replicate a voice. This is a basic principle of AI: more data means better performance. However, it is not just public figures who are at risk. If you have ever posted a video or audio clip on platforms like Facebook, Instagram or YouTube, your voice could potentially be cloned.

What are the difficulties in detecting voice spoofing?

Detecting synthetic voices is a complex task. Traditional security and speaker verification systems often rely on voice recognition for authentication, but AI-generated voices have become sophisticated enough to deceive them. Some of the core technical challenges include:

Spectro-temporal similarity: fake voices closely mimic both the pitch and timing patterns of natural speech.

Data imbalance: real-world datasets typically contain fewer examples of spoofed voices, making it harder for AI to recognise these cases.
Generalisation: many detection models struggle when faced with spoofing methods they were not specifically trained on.

How to protect yourself

While the threat is real, there are steps you can take to safeguard against voice spoofing:

Be sceptical: if you receive an unexpected call requesting sensitive information or money, verify the caller's identity through another channel.

Use safe words: establish a unique code word with family and close contacts that can be used to confirm identities during emergencies.

Limit voice sharing: be cautious about sharing voice recordings online, especially on public platforms.

Stay informed: keep abreast of the latest scams and educate yourself on how to recognise potential threats.

Voice spoofing poses a growing threat as AI continues to advance, making it easier than ever to mimic someone's voice convincingly. Whether you are a public figure or an everyday social media user, you could become a victim of voice cloning.

From RTÉ Radio 1's Ray D'Arcy Show, AI expert Henry Ajder talks about how deepfakes are damaging online trust and what some platforms are doing to rebuild it

How our research work is helping

Our recent research proposes an effective approach for detecting voice spoofing using a hybrid deep learning (DL) architecture called VGGish-LSTM. We use VGGish, a pre-trained model developed by Google, to extract robust acoustic embeddings from audio data. These embeddings capture detailed features that are often not noticeable to human listeners but are critical in distinguishing synthetic voices. The extracted features are then analysed by a Long Short-Term Memory (LSTM) network, a type of artificial neural network designed to detect long-term patterns and dependencies in sequential data. These networks excel at identifying variations in speech rhythm, tone and pitch that could indicate synthetic or manipulated speech.
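The pipeline described above, a pre-trained front end producing per-frame embeddings followed by an LSTM that reads them in sequence, can be sketched in a few dozen lines. This is a toy illustration only: `toy_embeddings` below is a hypothetical stand-in for VGGish, and the LSTM weights are random rather than trained, so it shows the mechanics of the architecture, not the actual detector from the research.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def toy_embeddings(signal, frame_len=256):
    """Stand-in for a pre-trained front end such as VGGish: chop the
    waveform into frames and take each frame's log-scaled magnitude
    spectrum. (Real VGGish emits one 128-dimensional embedding per
    ~1 second of audio.)"""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))  # (n_frames, n_bins)

class TinyLSTM:
    """Minimal LSTM cell in plain numpy, showing how a recurrent network
    consumes a sequence of per-frame embeddings while carrying state."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and bias per gate: input, forget, output, cell
        self.W = {k: rng.standard_normal((n_hidden, n_in + n_hidden)) * 0.1
                  for k in "ifog"}
        self.b = {k: np.zeros(n_hidden) for k in "ifog"}
        self.n_hidden = n_hidden

    def run(self, sequence):
        h = np.zeros(self.n_hidden)  # hidden state (short-term)
        c = np.zeros(self.n_hidden)  # cell state (long-term memory)
        for x in sequence:           # one embedding frame at a time
            z = np.concatenate([x, h])
            i = sigmoid(self.W["i"] @ z + self.b["i"])  # what to write
            f = sigmoid(self.W["f"] @ z + self.b["f"])  # what to keep
            o = sigmoid(self.W["o"] @ z + self.b["o"])  # what to expose
            g = np.tanh(self.W["g"] @ z + self.b["g"])  # candidate values
            c = f * c + i * g
            h = o * np.tanh(c)
        return h  # final state: a summary of the whole utterance

# A one-second toy "utterance": a 120 Hz tone plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 4096)
utterance = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)

emb = toy_embeddings(utterance)                        # frame embeddings
summary = TinyLSTM(n_in=emb.shape[1], n_hidden=8).run(emb)
score = sigmoid(float(summary.sum()))                  # a trained classifier head would sit here
```

A real system would swap the front end for actual VGGish embeddings and train the LSTM, plus a classifier head, on labelled genuine and spoofed speech.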
Evaluated on the widely used ASVspoof 2019 dataset, our model achieved accuracy of over 90%. This performance shows the model can detect spoofing effectively and could be used in real-world scenarios such as banking authentication, call centre security or smart home voice verification systems. With ongoing research into detection technologies, such as the VGGish-LSTM model described here, we can continue developing robust defences against voice spoofing scams. For users, the advice is to stay vigilant, limit how much voice data you share online and adopt simple safety practices.
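For the evaluation step, a minimal sketch of two metrics commonly reported for anti-spoofing detectors, accuracy and equal error rate (EER), might look like the following. The scores and labels are invented for illustration and are not results from the ASVspoof 2019 dataset.

```python
import numpy as np

def accuracy_and_eer(scores, labels):
    """Accuracy at a fixed 0.5 threshold, plus the equal error rate:
    the operating point where the false acceptance rate (spoofs let
    through) balances the false rejection rate (genuine voices blocked)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)   # 1 = genuine, 0 = spoofed
    acc = float(((scores >= 0.5).astype(int) == labels).mean())

    n_spoof = max(int((labels == 0).sum()), 1)
    n_genuine = max(int((labels == 1).sum()), 1)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.unique(scores)):     # sweep candidate thresholds
        far = ((scores >= t) & (labels == 0)).sum() / n_spoof    # spoof accepted
        frr = ((scores < t) & (labels == 1)).sum() / n_genuine   # genuine rejected
        if abs(far - frr) < best_gap:        # closest to far == frr
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return acc, float(eer)

# Hypothetical detector scores: higher means "more likely genuine"
acc, eer = accuracy_and_eer([0.9, 0.8, 0.6, 0.3, 0.2, 0.1],
                            [1, 1, 1, 0, 0, 0])
```

Here the toy detector separates the two classes perfectly, so accuracy is 1.0 and the EER is 0.0; on real data the two metrics trade off against the chosen threshold.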