
Using AI Correctly May Take Some Thinking
To many people with an inside view of the industry, it's the details of our work with LLMs that matter. If you're sufficiently involved to already understand the major applications and use cases, then you're likely thinking about exactly how this technology will work in any given business.
A CEO sits in exactly this hot seat. He or she has a responsibility to know the trends, the major technologies, and how to deploy them in a general sense, and also a responsibility to apply them to the specific business processes that leadership oversees.
Some of this involves understanding how people work with processes and products.
I hear a lot of people use those 'three P's' of people, processes and products to talk about the overall environment in which we're figuring out how to use AI.
Here's an interesting analogy: buildings have doors, and people need to go in and out.
Sometimes a door is made to be pushed to get access. Other times it's made to be pulled. Then there are revolving doors.
The commonality is that people need to know how to use those doors effectively. If you're pushing on a pull door, you're not going to get anywhere. And vice versa.
I heard a lot about this kind of challenge in a panel at Imagination in Action this spring.
Panelists talked about various challenges, and Kevin Chung went first with ideas on employee alignment.
'Employees are sabotaging AI strategy,' he said, estimating that a full 40% of employees at a given workplace are 'not aligned' with what leadership wants to do. He also stressed the importance of AI literacy, which is hard to build given that these technologies are only a few years old.
Panelist Hira Dangol talked about business alignment and ROI.
'I think that governance is one of the key enablers, and also a limiting factor when it comes to how you are going to achieve success at the transformation of your AI journey,' he said. 'And then if you look at the value chain structure from the bottom up for the enterprise, it's pretty much like a buy versus build model.'
Noting the utility of events like hackathons, Olga Beregovaya described a set of analytical processes for vetting AI.
'It works great at the lab, but how do you actually take it to enterprise level?' she said. 'How do you make sure it's scalable? How do you mitigate technical constraints like, say, latency, when you need to deliver content instantly? So it's basically how do you manage supply chain and global content delivery chain, end to end, making sure that AI has a role where it really does have a role.'
She also talked about human translators being involved with AI's treatment of language. So much of the technology, she pointed out, is English-driven, and other languages can pose a challenge. In some ways, it's amazing how AI adapts. In other ways, there are still gaps and hurdles to get over.
'Now, when AI … works great despite those 20% of hallucinations and other flaws, what do translators actually do?' she said. 'And how do they reinvent themselves? And that touches upon the ethics, right? You want to make sure that people have jobs, and you want to make sure that workforce emotional safety is at play there.'
At the end, each panelist shared both optimism and pessimism about AI.
Chung weighed exciting applications like self-driving vehicles against the idea that great power comes with great responsibility.
Dangol talked about security, privacy and risk management standards.
'It works,' Beregovaya said, again going back to cross-language capacity. However, she added, AI is not a 'silver bullet,' and people have fears around adoption that have to be dealt with.
All of this gives us good feedback for figuring out how to integrate AI into business the right way. I've always said that technology can help or hinder a team, depending on whether it's the right fit. Let's make sure we have it right before letting these technologies transform the enterprise.

Related Articles


Time Business News
Detector de IA: Understanding the Technology Behind Identifying AI-Generated Content
As AI-generated material becomes harder to tell apart from human work, specialized tools known as Detector de IA have been developed to determine whether content was created by a human or generated by artificial intelligence. This article explores how AI detectors work, their applications, their limitations, and the future of this important technology.

A Detector de IA is a tool or algorithm that examines digital content and assesses whether it was produced by a human or generated by an artificial intelligence system. These detectors can analyze text, images, audio, and video for patterns commonly associated with AI-generated content. They are being widely adopted across sectors such as education, journalism, academic research, and social media content moderation, and as AI-generated content continues to grow in both volume and complexity, the need for accurate and dependable detection methods has increased dramatically.

AI detectors rely on a combination of computational techniques and linguistic analysis to assess the likelihood that content was generated by an AI. Some of the most common methods:

- Perplexity analysis: Perplexity measures the predictability of a text, indicating how likely a sequence of words is under a language model. AI-generated text tends to be more predictable and coherent than human writing, often lacking the spontaneity and errors of natural human language, so lower perplexity scores typically suggest a greater chance that the text was generated by an AI system (a minimal code sketch follows the tool list below).
- Stylistic analysis: AI writing often exhibits characteristic patterns, such as overly formal language, repetitive phrasing, or perfectly structured grammar. Detectors look for these patterns to infer authorship.
- Trained classifiers: Some detectors use supervised learning models trained on extensive datasets containing both human- and AI-generated content. These models learn the subtle distinctions between the two and assign a probability score indicating whether a given text was AI-generated.
- Watermarks and metadata: Newer methods embed hidden watermarks into AI-generated content that compatible detection tools can identify. In some cases, detectors also analyze file metadata for clues about how and when content was created.

Several platforms and tools have emerged to help users detect AI-generated content. Some of the most well-known include:

- GPTZero: One of the first widely adopted detectors, designed to identify content generated by large language models like ChatGPT.
- Combined originality suites: Platforms popular in academic and publishing settings that offer plagiarism and AI content detection in a single tool.
- Turnitin AI Detection: A go-to tool for universities, integrated into the Turnitin plagiarism-checking suite.
- Copyleaks AI Content Detector: A versatile tool offering real-time detection with detailed reports and broad language support.
- OpenAI Text Classifier (now retired): Initially released to help users differentiate between human and AI text, it laid the groundwork for many newer detectors.
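To make the perplexity method concrete, here is a minimal sketch of how a detector might score a text. It assumes the Hugging Face transformers library and the public gpt2 checkpoint; the 512-token cap is an illustrative choice, and real detectors combine this signal with many others, calibrated against known human and AI corpora.

```python
# Minimal sketch of perplexity-based AI-text scoring, assuming the
# Hugging Face `transformers` library and the public "gpt2" checkpoint.
# Illustrative only: real detectors combine many signals and thresholds.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score a text by how predictable it is to the language model.
    Lower perplexity = more predictable = weak evidence of AI authorship."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

sample = "The quick brown fox jumps over the lazy dog."
print(f"perplexity: {perplexity(sample):.1f}")
# A detector would compare this score against thresholds calibrated on
# known human and AI corpora, not treat any single number as proof.
```

The key design point is that the score only means something relative to a baseline: the same passage can look 'predictable' to one model and not to another, which is one reason different detectors disagree.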
With students increasingly using AI tools to generate essays and homework, educational institutions have turned to AI detectors to uphold academic integrity: teachers and universities use these tools to verify that assignments are genuinely authored by students. AI-written news articles, blog posts, and press releases have also become common, and detectors help journalists verify the originality of their sources and combat misinformation. Writers, publishers, and editors use AI detectors to ensure authenticity in published work and to maintain brand voice consistency, especially when hiring freelancers or accepting guest submissions. Social media platforms use detection tools to identify and block bot-generated content or fake news, improving content quality and user trust. And as organizations are increasingly required to disclose their use of AI, detection tools help verify content origin for regulatory compliance and transparency.

Despite their usefulness, AI detectors are far from perfect. They may mistakenly classify human-written content as AI-generated (a false positive) or vice versa (a false negative), which can have serious consequences, especially in academic or legal settings. As generative models like GPT-4, Claude, and Gemini become more advanced, their output increasingly resembles human language, making detection significantly harder. Most detectors are trained predominantly on English-language content, so their accuracy drops when analyzing other languages or domain-specific writing (legal or medical documents, for example). And users can easily modify AI-generated content to bypass detection: a few manual edits or some paraphrasing can make it undetectable to most tools.

As AI detectors become more prevalent, ethical questions arise. Should users always be informed that their content is being scanned for AI authorship? Can a student or professional be penalized solely on the basis of a probabilistic tool? How do we protect freedom of expression while maintaining authenticity? There is an ongoing debate about striking the right balance between technological regulation and user rights.

Looking forward, AI detectors are expected to become more accurate, more nuanced, and more deeply embedded in digital ecosystems. Some future developments may include:

- Built-in AI signatures: AI models could embed invisible watermarks into all generated content, making detection straightforward (a toy sketch follows this list).
- AI-vs-AI competition: Detection tools may be powered by rival AI systems trained to expose the weaknesses of generative models.
- Legislation and standards: Governments and industry bodies may enforce standards requiring disclosure when AI is used, supported by detection audits.
- Multi-modal detection: Future detectors will analyze not only text but also images, videos, and audio to determine AI involvement across all content types.
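To make the built-in-signature idea concrete, here is a toy sketch of green-list watermark detection in the spirit of published LLM watermarking research (Kirchenbauer et al., 2023). The word-level tokenization, hash-based green list, and 0.5 green fraction are illustrative simplifications, not any vendor's actual scheme; a real detector would operate on model vocabulary tokens with a keyed pseudorandom function.

```python
# Toy sketch of green-list watermark detection, loosely in the style of
# Kirchenbauer et al. (2023). Hypothetical simplification: word tokens
# and a plain hash stand in for a real LLM vocabulary and keyed PRF.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green"

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign `word` to the green list, seeded by `prev_word`."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_z_score(text: str) -> float:
    """z-score of the observed green-token count vs. the unwatermarked
    baseline. Watermarked generators oversample green tokens, pushing z up."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / stddev

# Ordinary human text should hover near z = 0; a watermarked generator
# that prefers green tokens would score several standard deviations higher.
print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```

Because the test is statistical, a long watermarked passage is easy to flag, while a short or heavily paraphrased one may not accumulate enough green-token excess to stand out, which mirrors the evasion problem described above.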
Detector de IA tools have become vital in a world where artificial intelligence can mimic human creativity with striking accuracy. They help preserve trust in digital content by verifying authenticity across education, journalism, and communication. However, as generative AI evolves, so too must detection tools, becoming smarter, fairer, and more transparent. In the coming years, the effectiveness of AI detectors will play a critical role in how societies manage the integration of AI technologies. Ensuring that content remains trustworthy in the age of artificial intelligence will depend not only on technological advancement but also on ethical application and regulatory oversight.


The Verge
Max will show autoplaying video previews picked by AI
Max — soon to be called HBO Max again — is rolling out an update that automatically plays video previews when you hover over shows and movies. The video previews are launching to users in the US now, but Max is still working on an 'AI-assisted' tool to pick the clips that go in them. In the 'coming weeks,' Max says its AI tool will automatically identify 'standout' scenes from its shows and movies, which its team will then use to choose the clips featured in video previews. 'This AI-driven technology will enable us to identify Drop-in Moments for the vast catalog of content on Max and significantly reduce the manual time needed to curate previews,' Avi Saxena, Warner Bros. Discovery's chief technology officer, said in the press release. The streaming service will also include an option to turn off the audio or video in the autoplaying content, similar to Netflix. Max updated its interface in January to move the options from its top menu bar to the left side of the screen — basically the opposite of what Netflix is doing in its upcoming homepage redesign. Max will have another big change sometime this summer, as that's when the switch back to HBO Max will be official.


Forbes
AI Expands Our Capacity, But Are We Expanding Our Skills?
Nearly 25 years ago, our HR business partner team was facing a challenge similar to what we are experiencing today with AI adoption. As we transitioned to a shared services model, we recalibrated not just how work got done, but also what work was most valuable. The 'less valued' transactional tasks that most HR business partners wanted to offload, like answering questions about benefits, running headcount reports, and managing performance issues, were the very things that gave them their sense of worth. What appeared to be resistance to a changing strategy was actually resistance to a changing identity.

As an HR business partner, I had been preparing for a more strategic role through graduate studies, doing some external coaching work, and practicing what I was learning with the leaders I supported. I knew the skills needed to be a 'strategic HR business partner', and while I didn't have all of them, I was clear on the gap I needed to close. However, many of my colleagues had not confronted that deeper question of what 'strategic' really meant and the skills they needed to acquire and practice to become that.

Today, as AI promises to automate routine work, we're seeing the same pattern. On the surface, employees are resisting the use of technology and AI tools, but what they are ultimately resisting is the change to their identity. The roles they have played, the workflows they have been part of, and even how they are expected to communicate are all changing, yet most leaders are not being clear about what those roles and that identity are evolving into. Is the new identity we are supposedly carving out for them better than the one they have? Would it be better for them to resist, maintaining a healthy level of skepticism, until they better understand what is expected in their new work context?

Beyond identity, I am also seeing a skills gap that is not being addressed. How are organizations supporting employees not only in learning AI tools, but in developing the new skills they will need to make use of all that extra time?

The cultural shift we underwent in HR so many years ago was a two-way contract: we, as HR business partners, agreed to change, and our managers and HR leadership clearly articulated what that change looked like and how they were going to help us get there. The options were clear: evolve to become more strategic and partner with business leaders differently, move into a more operational role, or leave the company altogether. Some HR business partners chose to move into the shared services center when they realized they genuinely enjoyed the transactional work they had spent years trying to escape. Others left the company to continue doing that same work elsewhere.

Most of us who stayed shared one major mindset shift. Rather than asking, 'Which tasks are being taken away?' we asked, 'Which behaviors and skills do I need to develop to deliver greater value to the business?' And we made sure we had a development system set up to learn and practice those skills with each other and our leaders. AI is taking over more of the transactional work, but are we ready to tackle the strategic work it now makes possible?
Before rolling out another AI tool, organizations must show employees how this shift benefits them as well as the business, and provide them with the skills to work effectively alongside AI. The real question is no longer, 'What more can we get out of our employees with AI?' but rather, 'What more can we do for and with our employees and AI?'

A recent Stanford study showed that while 83% of employees in China see AI as an opportunity for growth, only 39% of U.S. workers share that optimism. We are throwing tools, stats, and training at people in the name of productivity and efficiency without connecting any of it to what matters in an employee's day-to-day workflow. No wonder we are not excited in the US.

The fundamentals of change leadership include explaining why we are changing, what is changing, and how we are helping people change. Yet, every day, we continue to launch another internal AI sandbox and announce that usage will be tracked and measured. When has counting course completions ever guaranteed that employees are adopting the change and gaining the skills they need?

Most organizations today see the opportunity of AI through two lenses: the productivity lens ('Do more tasks per hour') and the efficiency lens ('Process every request faster'). But what if we could be more productive and efficient while also giving employees opportunities to grow and develop the skills they will need to be more strategic with the time they get back?

What if, instead of asking, 'How can we do more tasks per hour?' we asked, 'How can I use the time I get back from leveraging AI on more strategic work?' When people see that their company is using AI as an amplifier for their work, rather than as a way to outplace them, they will more readily identify opportunities where AI can free them up to focus on higher-impact activities. Here are two examples, from Moderna and Klarna, of how companies have approached the 'why, what, and how' of pairing AI with human augmentation.

Moderna's shift from 'workforce planning' to 'work planning' in the context of AI is a great example of this. Moderna's 'why': create a more integrated strategic road map that not only increases opportunities for drug discovery but also allows employees to rethink workflows. Moderna's 'what': develop over 3,000 tailored versions of ChatGPT, called GPTs, designed to facilitate specific tasks, like dose selection for clinical trials, and, internally, to address basic HR questions related to performance, equity and benefits. Moderna's 'how': redesign and reimagine how technology and people interface by merging HR and IT under one Chief People Officer. Moderna is also upskilling employees, helping them spend the productive time they got back on more strategic work.

What if, instead of asking, 'How can we do more with less?' we asked, 'How do we ensure we are focused on the right priorities?' We know that being efficient alone isn't going to get us the results we want. Without deliberate human judgment, AI will only amplify the hustle habits that keep us busy with everything, rather than prioritizing the few things that truly matter. Klarna's experience illustrates the risk of prioritizing efficiency over effectiveness: focusing solely on speed and cost reduction, rather than on both employee and customer value, ultimately led to less effective results and a significant disconnect with employees.
Klarna's 'why': Klarna's CEO gave employees a clear 'why' for using AI, but it didn't include developing or upskilling them. It was focused solely on leveraging AI as much as possible to answer customer service questions, disregarding the long-term impact and the potential consequences of letting go of 700 representatives. Klarna's 'what': after replacing customer service representatives with AI, the company had to reverse course when it realized it had "amputated" empathy from its customer interactions. Klarna's 'how': today, Klarna has adopted a hybrid approach in which AI handles simple queries while humans manage the more complex cases.

When AI handles the transactional load and we empower people to think strategically, we do more than speed up work; we redefine it. Instead of setting AI and humans in opposition, let's reinforce what they can accomplish together: higher productivity and efficiency, but also greater strategic leverage.

In your next 1:1, open a discussion about AI that goes beyond how much your team members are using the tools. Ask them how they are using their time now that AI is doing more of the tactical tasks. Ask what support they need to keep doing that kind of strategic work. Clarify the skills you would like to see them build, and ask how you can support them in developing those skills. It's through these kinds of discussions that we can continue to work strategically with AI, not against it, with humans leading the way.