Former German referee expects more AI in officiating, even on fouls
Referee Felix Brych smiles during an interview with the German Press Agency (dpa) at the launch of his book "Aus kurzer Distanz" ("From a Short Distance"). Peter Kneffel/dpa
Artificial intelligence (AI) will have a growing influence on the officiating of football matches, including the detection of fouls, former Bundesliga referee Felix Brych believes.
"AI calculates if someone is offside. There are even the first tests on how to calculate physical contact - was it really enough for a player to fall?" Brych told weekly Die Zeit.
Brych, 49, ended his two-decade career last month after a record 359 Bundesliga games and 69 in the Champions League.
He said it was important for him to rely on his own perception and not just on the technical aids that are increasingly built into the game.
"If someone falls down without being hit, his team-mates immediately look at me. They want to know: What's the referee doing now?" Brych said.
"If, on the other hand, someone is fouled properly, the team-mates first want to know how badly their team-mate has been hit. They then react a fraction later."

Related Articles


Digital Trends · 16 minutes ago
Your Meta AI chats are not really a secret. Here's how to keep them private
At this point, it shouldn't come as a surprise that discussing your deepest secrets and personal issues with an AI chatbot is not a good idea. And when that chatbot is made by Meta, the company behind Facebook and Instagram (with all its sordid history of user data privacy), there is even more reason to be cautious. But it seems a lot of users are oblivious to the risks and, in turn, are exposing themselves in the worst possible ways.

Your chatbot interactions with Meta AI, from seeking trip suggestions to jazzing up an image, are publicly visible in the app's endlessly scrolling vertical Discover feed. I installed the app a day ago, and in less than 10 minutes of using it I had already come across people sharing their entire resume, complete with their address, phone number, qualifications, and more, on the main feed page. Some had asked the Meta AI chatbot for trip ideas in Bangkok involving strip clubs, while others had weirdly specific demands regarding a certain skin condition. Users on social media have also documented the utterly chaotic nature of the app's Discover feed. An expert at the Electronic Privacy Information Center told WIRED that people are sharing everything from medical history to court details.

How to plug the Meta AI app's privacy holes

Of course, an app that doesn't offer granular controls and a more explicit setup flow regarding chat privacy is a disaster waiting to happen. The Meta AI app clearly fumbled on this front and puts the onus of course correction on users. If you have the app installed on your phone, follow these steps:

1. Open the Meta AI app and tap the round profile icon in the top-right corner to open the Settings dashboard.
2. On the Settings page, tap Data & Privacy, then Manage your information on the next page.
3. You will now see an option that says 'Make all public prompts visible to only you.' Tap it and select 'Apply to all' in the pop-up window.
4. If you are concerned about previous AI chats that contained sensitive information, clear the past log by tapping the 'Delete all prompts' option on the same page.
5. Next, go back to the Data & Privacy section and tap the 'Suggesting your prompts on other apps' option. On the next page, disable the toggles for Instagram and Facebook.
6. If you have already shared Meta AI interactions publicly, tap the notepad icon in the bottom tray to see your entire history. On the chat record page, tap any past interaction to open it, then tap the three-dot menu button in the top-right corner. A pop-up tray appears with options to either delete the chat or make it private so that no other users of the Meta AI app can see it in their Discover feed.

As a general rule of thumb, don't discuss personal or identifiable information with the chatbot, and avoid sharing pictures for creative edits.

Why is it deeply problematic?

When the Meta AI app was introduced in April, the company said its Discover feed was 'a place to share and explore how others are using AI.' Right now, it's brimming with all kinds of weird requests. A good few of them appear to be fixated on finding free dating and fun activity ideas; others are about career and relationship woes, finding love in foreign lands, and skin issues in intimate parts of the body.
"Facebook's 'Meta AI' literally just puts everyone's private conversations directly on a public For You page what the actual fuck lol" — Daniel (@danielgothits) June 12, 2025

Here is the worst part. The only meaningful warning appears when you are about to post your AI creation (or interaction) to the feed. The pop-up message says 'Feed is public' at the top, and underneath it you see the 'Post to feed' button. According to Business Insider, that warning was not always visible and was only added after the public outcry. But it appears that a lot of people are not aware of what the 'Post to feed' button actually does. To them, it might come across as referring to their own feed, where their Meta AI chats are catalogued in an orderly fashion for their eyes only, the way chats appear in other chatbot apps such as ChatGPT and Gemini.

Another risk is exposure. During the initial setup, when the app picks up account information from the Facebook and/or Instagram app installed on your phone, the text boxes are dynamic, which means you can go ahead and change the username. Notably, there is no 'edit' or 'change' signal, so to an ordinary person it would simply appear as if Meta AI had extracted the correct username from their pre-installed social app. It's not too different from the seamless sign-up experience in apps that offer users Google Account or Apple ID options to log in.

"Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly. They get pretty personal (see second pic, which I anonymized)." — Justine Moore (@venturetwins) June 11, 2025

When I first installed the app on my iPhone 16 Pro, it automatically identified the Instagram account logged into the phone. I tapped the button with my username plastered over it and was taken directly to the main page of the Meta AI app, where I could jump straight into the Discover feed. There was no warning about privacy, about how a log of my data would be shared, or that it could even become public. If you don't want your AI prompts to appear in the public Discover feed, you have to manually enable the option in the app's settings, as described above.

The flow is slightly different on Android, where you see a small 'chats will be public' notice during the initial set-up process. That message appears only once, and not on any other page. Just like on iOS, you must manually enable the option to prevent your chats from appearing in the Discover feed and to stop the chat prompts from appearing inside Instagram and Facebook.

If you absolutely must use Meta AI, you can already summon it in WhatsApp, Instagram, and Facebook. In those apps, you can ask Meta AI random questions, have it create images, or give your pictures a fun makeover, among other things. Be warned, however, that AI still struggles with hallucination, and you must double-check whatever information the chatbot serves you.


Android Authority · 17 minutes ago
Google Photos' upcoming Remix feature could launch with a video upgrade (APK teardown)
Edgar Cervantes / Android Authority

TL;DR

- Google Photos' upcoming Remix feature, internally codenamed 'Bluejay,' will likely support video editing.
- When it launches, the feature will use generative AI to transform videos into various styles, with options potentially including 'I'm feeling lucky,' 'Subtle movements,' and 'Go wild.'

Creating AI photos and videos is all the rage these days, but many people are also using AI to remix their existing photos and videos. For instance, the Studio Ghibli trend went viral recently, and people had a lot of fun reimagining themselves in the popular art style. We've previously spotted Google Photos working to incorporate this generative AI use case with the upcoming Remix feature. While the feature is yet to launch, we've now spotted clues indicating it will work for both photos and videos.

You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else. An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

Google Photos v7.34 includes code indicating that the upcoming Remix feature could also support video edits. The relevant strings read:

Remix
Bluejay Video
I'm Feeling Lucky.
Suble Movements.
Go Wild.

Here, 'bluejay' is the working codename for the Remix feature, and in the second string it is used as a placeholder for the marketing name 'Remix' (which is still a work-in-progress name and may or may not be final). While the Remix feature for photos could offer styles like claymation and anime, the video version comes with its own set of style suggestions. We could spot these three:

- Random style: I'm Feeling Lucky
- Subtle style: Subtle Movements
- Wild style: Go Wild

As is the theme with the Remix feature, the video version will also likely use generative AI to transform your video into these different styles. Google has yet to announce the Remix feature, and it's still very much a work in progress. We don't know if and when it will roll out to users. We'll keep you updated when we learn more.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info; it's your choice.
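For readers curious how a teardown surfaces strings like these, here is a minimal sketch of the general technique, not Android Authority's actual tooling. It assumes the APK has first been decoded with a resource decoder such as apktool, which unpacks string resources into a plain-text res/values/strings.xml; the file path and the find_strings helper are illustrative assumptions.

# Illustrative sketch: scan a decoded APK's string resources for feature markers.
# Assumes the APK was already decoded, e.g. with `apktool d photos.apk -o photos_decoded`,
# which produces res/values/strings.xml as plain XML. Paths and names are hypothetical.
import xml.etree.ElementTree as ET

def find_strings(strings_xml_path: str, keywords: list[str]) -> list[tuple[str, str]]:
    """Return (resource_name, value) pairs whose value mentions any keyword."""
    tree = ET.parse(strings_xml_path)
    hits = []
    for node in tree.getroot().iter("string"):
        value = node.text or ""
        if any(k.lower() in value.lower() for k in keywords):
            hits.append((node.get("name", ""), value))
    return hits

if __name__ == "__main__":
    for name, value in find_strings("photos_decoded/res/values/strings.xml",
                                    ["bluejay", "remix"]):
        print(f"{name}: {value}")

Matching resource names and values of this kind are what teardown reports quote when predicting unreleased features.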


Forbes · 31 minutes ago
AI In Education: Why Teachers Need Tools, Time And Training
A majority of teachers (60%) now report integrating AI into their lessons, a significant jump from just 40% the previous year, according to Education Week. Yet, despite this rapid adoption, 58% of K-12 teachers still lack formal AI training nearly two years after the introduction of ChatGPT. This disconnect reveals the reality of AI implementation in American schools: teachers are embracing the technology faster than institutions can support them.

As someone who has spent over 15 years working with young people through WIT (Whatever It Takes), I've observed this transformation firsthand. Our organization works with teen entrepreneurs who use AI daily for business planning, content creation, and problem-solving. The students arriving in classrooms today expect their teachers to understand and guide their use of AI, but many educators are learning these tools on their own time.

The data shows educators are finding practical applications across multiple areas. According to K-12 Dive research, chatbots like ChatGPT are the tools teachers apply most often: 53% of educators use them weekly, with English language arts and social studies teachers in middle and high schools showing the highest integration rates. These applications demonstrate that teachers understand the potential of AI. They're using it to enhance their existing strengths, including personalizing learning, creating more effective assessments, and developing grade-appropriate materials.

At WIT, we developed WITY, our custom AI assistant that helps teen entrepreneurs refine business pitches and conduct market research. Through this work, we've learned that successful AI integration requires both the right tools and proper training on how to use them effectively. Based on these insights, we now partner with schools and teachers to help them develop effective AI strategies that work in real-world classrooms. Our experience building AI tools for young entrepreneurs has taught us what educators need: not just access to technology but frameworks for using it purposefully.

The statistics reveal the scope of support needed. According to EdWeek research, only 43% of educators have participated in at least one AI training session, up from 29% in 2024. Teachers cite several barriers to getting the training they want: nearly half haven't explored AI tools because of more pressing responsibilities, while others report requesting district policies for student AI use only to encounter indifference or unclear direction from administrators. Some teachers are so frustrated by the lack of support that they're considering leaving the profession.

Teachers require (and deserve) time for hands-on experience with AI tools, opportunities for collaboration with colleagues, and ongoing support as they experiment with new approaches. Successful training programs typically include:

- Practical exploration time. Teachers need dedicated hours to experiment with AI tools, not quick add-ons to existing professional development sessions.
- Peer collaboration. Educators learn effectively from colleagues who share similar challenges and student populations.
- Ongoing support. AI capabilities evolve rapidly, requiring continuous learning rather than a one-time workshop.
- Clear guidelines. Teachers need frameworks for distinguishing between appropriate AI use and academic integrity violations.

Educators wonder whether AI shortcuts could weaken students' creative problem-solving skills or reduce their ability to tolerate challenging work.
Some have noticed students becoming overly dependent on AI for tasks they should master independently. Training programs are most effective when they acknowledge and build upon real classroom experiences. Teachers benefit from exploring AI's strengths and weaknesses together, developing strategies that preserve rigorous learning standards.

Innovative educators are already modifying their approaches. They're asking more questions verbally, designing collaborative projects that require original thinking, and creating assessments that reveal authentic understanding. These innovations show how teachers can maintain academic integrity while preparing students for an AI-integrated world.

The most successful AI implementations provide teachers with tools specifically designed for educational use rather than general-purpose AI platforms. Educational AI tools typically offer:

- Curriculum alignment. Tools that connect to state standards and learning objectives make integration more straightforward.
- Student safety features. Educational AI platforms include content filters and privacy protections that general tools may lack.
- Assessment capabilities. AI tools designed for education often include features that track student progress and help identify learning gaps.
- Collaboration features. Tools that support both individual and group work align with the realities of the classroom.

At WIT, we've found that custom AI solutions often work better than off-the-shelf options because they can be designed around specific educational goals and the needs of individual students.

Young people often adapt to new technologies quickly, making them valuable partners in the integration of AI. Students can help teachers understand how AI tools work, while teachers provide essential guidance on ethical use and critical evaluation of AI outputs. This collaborative approach benefits everyone: students learn to use AI responsibly, teachers gain technical insights, and the partnership model creates mutual respect and shared ownership of the learning process. The teen entrepreneurs in our WIT programs don't see AI as threatening or mysterious. They view it as a powerful assistant that amplifies their creativity and problem-solving abilities. Their teachers should feel the same way.

Schools can build on the foundation established by early adopters. The 60% of teachers already integrating AI demonstrate that educators are ready to embrace these tools when they receive appropriate support. Successful implementation requires:

- Investment in training time. Meaningful AI literacy development requires dedicated professional development hours rather than brief overviews.
- Access to appropriate tools. Teachers require AI platforms specifically designed for educational use, equipped with robust safety and privacy features.
- Clear policies. Guidelines that distinguish between AI as a learning tool and AI as a substitute for learning benefit both teachers and students.
- Ongoing support. AI capabilities change rapidly, requiring continuous learning opportunities rather than one-time training sessions.

Over the past year, as I've helped schools with AI adoption, I've watched them struggle with their AI policies and integration. The institutions that succeed share one trait: they invest seriously in their teachers first. The schools winning with AI aren't just buying software; they're creating time for teachers to learn, experiment, and share what works.
Early adopters have proven that this approach delivers results, and more teachers are interested. Teachers can't master AI tools during lunch breaks or after exhausting school days. They need protected time, practical training, and permission to try new approaches without penalty.