
Latest news with #KarlJackson

Bersham Wheels event returning to Wrexham this summer

Leader Live

4 days ago



Bersham Wheels is returning for its fifth year on Saturday, June 21, promising a day of motoring and music at the Bersham Road site. The free festival will showcase sports and classic cars, motorcycles, and military vehicles, with a Ferrari Modena and Porsches among the highlights.

Motorcycles will be on view (Image: Coleg Cambria)

Karl Jackson, site lead and assistant principal for the £10 million Institute of Technology at Bersham Road, said: "I can't believe this is the fifth Bersham Wheels.

"Time flies, but it has been amazing to see the event go from strength to strength, with people of all ages visiting and enjoying the cars and bikes, the music and everything else we have going on.

"While the focus is on the engineering and motoring aspect, there is something for everyone, and this summer is no different, with some of the area's best bands and singers appearing, a selection of food and drink, and local businesses selling everything from craft items and jewellery to local produce and more.

"Have your fingers crossed for sunshine. We look forward to seeing you all on June 21."

The event will also feature guided tours of the site, live music, games, trade stands, and a wide selection of food and drink. On-site parking is free, and to book a stall, email or call 01978 267809. For free tickets and more details, visit Bersham Road Wheels Event Tickets on Eventbrite.

Table of Experts: Succeeding with AI

Business Journals

6 days ago



Since the advent of the internet in the 1990s, nothing has caused as much excitement and anxiety in the business community as artificial intelligence (AI). Organizations know they need to embrace it, but often they are not sure how — both because it is new and because it is evolving so rapidly. Driven by large language models, generative AI provides an opportunity to transform jobs, which can cause fear in the workforce, while also creating new avenues for business growth. To better understand the steps companies can take to become AI-enhanced businesses, The Milwaukee Business Journal recently assembled a panel of experts to discuss what companies should know about AI, including risks, opportunities and effective strategies for deploying it within an organization.

Moderator: AI and how it can be used can be overwhelming. Many businesses are either frozen in place, not knowing where to start, or are jumping from one potential use case to another. Either way, they are not realizing the benefits that AI has to offer. Can you provide a realistic lens for businesses to view generative AI and what it can do for them?

Karl Jackson: The companies being most successful with AI tackle the opportunity and the technology together. They are intentional in use-case identification — looking at processes and products to see how generative AI can automate some of the human work, and then deriving a business case from that. They also enable experimentation for the engineering teams, call centers and others who are actually doing the work. Having your teams experiment with the technology while collaborating to identify use cases will help you find the most business value.

Alli Jerger: You have got to start with a problem, because you can't find the solution if you don't know what the problem is. AI is just a tool. Granted, it is a tool that has the opportunity to really transform things, but it is still just a tool. Using it without identifying a problem is wasteful and frustrating. Nonetheless, some companies are doing just that because they fear being left behind if they don't get on the AI bandwagon. There is some level of truth to that, so you do have to get started. At the very least, you need to put in place some sort of technology adoption plan that covers the organizational change that needs to happen for your organization to embrace AI.

Michael Gentry: That plan is about taking a step back and asking whether your company is ready to approach this question. Are you ready to do a pilot program? What are the steps you need to take so that the right people have a voice at the table to properly assess an AI tool? There should be an internal governance document that includes review mechanisms and clear accountability. There should also be an employee-facing policy so that employees know what the company is doing and what they can or cannot do.

Moderator: One of the challenges with AI is that it is being exposed simultaneously to employees of all levels in multiple departments of the organization. How does a company go about developing an effective procedural and legal roadmap when there are so many potential inputs?

Gentry: It depends on the size of the business and the group of employees that are going to be implementing the technology. You have to start with who needs to be at the table initially. You have to ask some specific legal questions. What is the data you are going to use, and what are the sources of law that will attach to that data? What are the uses going to be? You need some overarching policies that prohibit employees from indiscriminately using publicly available generative AI tools. You want to make sure employees aren't exposing confidential information. And you want to make sure that employees who are empowered to use AI are responsible for verifying that the information generated by the AI tool is correct.

Jerger: We like to focus on the idea of "guidelines and guardrails" versus "restrictions and ramifications" to allow for the proper use of AI without stifling innovation. Ultimately, companies have to assess their culture, their strategy and their appetite for change and risk. From a corporate culture perspective, how does the organization view new technologies? Do they have internal legal counsel, and if they do, does that counsel have enough expertise in this rapidly developing space? As Michael said, a cross-functional governance team is really important.

Jackson: We lean on some of our traditional change-management processes. We start with education and preparation, followed by adoption and finally acceleration. We discuss potential legal ramifications at the beginning, in the education and preparation phase. We also recommend putting together an acceptable-use policy as well as AI governance and responsibility councils. The key is making sure that everyone who needs a seat at the table has a seat at the table, including security, compliance, legal, engineering and core business functions. And, like Alli said, you want to lean towards guardrails and guidelines so that you do not stifle innovation.

Moderator: What should a corporate AI strategy include to ensure that AI can be used innovatively within the organization while protecting the company's data and legal obligations?

Gentry: The process should lay out the questions that need to be asked of vendors and include the standards the organization is going to have for vendor contracts. Make sure vendor contracts are careful around indemnification and how risk is assigned. The process should include stakeholders who understand where the data is stored. It should also include the legal team, to make sure you are meeting your compliance obligations and to assess whether there is a discriminatory impact from using a particular AI product in the context of sensitive decisions, including hiring.

Jerger: The first question to ask is how the company's vision for the use of AI connects with the larger strategic vision of the organization. Make sure there is a governance framework that is flexible enough to change with the times. Data protection is real. We need to protect our data and our customers' data. We also have to make sure the data does not have a bias. Everyone using these tools needs to have a base awareness of the potential pitfalls — which gets back to the technology adoption process we were talking about. Finally, we need to let people know it is okay to use AI. It is not cheating to have AI rephrase an email for an audience. It is not cheating to use AI to develop a job description. Make sure that employees have permission to use AI as long as it falls within your established guidelines.

Jackson: You should split up the different types of risk, because they might have different solutions. Dealing with things like corporate data leaks, privacy breaches or customer data exploitation has a very specific gatekeeping-technology solution. Output biases and hallucinations (when generative AI produces incorrect results) have different solutions. Intentional education for your employees is also incredibly important. You can create the best policy, but it will be hard to enforce if no one knows it exists.

Moderator: How has the AI landscape changed under the Trump Administration?

Jerger: The Trump Administration is focusing on deregulation. It revoked Biden's executive order that put guidelines around the ethical use of AI. In the education sphere, there is a relatively new executive order that focuses on the integration of AI literacy in the K-12 space, which will give instructors the tools they need to build tomorrow's workforce.

Gentry: Under the Trump Administration, a number of agency-promulgated guidelines around the use of AI have been withdrawn. The administration wants to be seen as pro-development, and especially as pro-American development of the technology. As the federal government steps back, companies will be left with an inevitable patchwork of state regulations and laws. Colorado was an early adopter of a comprehensive law on the uses of AI. I think other states may pass narrower legislation, so you have to look at AI issues on a state-by-state basis. I do think you are likely to see further regulation of chips and the other tools necessary to build large language models (LLMs). The government justifies some of that regulation on national security grounds, because we want to be the leader in the field of AI development.

Jackson: Many of the companies we work with are multinational corporations, and what they want to know is how to stay in compliance with the plethora of AI regulations coming out globally. Unlike other areas of data use, you can't just take the most restrictive version of the legislation and build to that. The AI ecosystem is moving too fast for that. You really have to look at the different use-case areas and line each one up with the legislation that affects it.

Moderator: Many employees see AI as a threat to their jobs. How real is this threat?

Jerger: Someone once said, "AI is not going to replace your job, but the people who embrace AI are going to replace you." We know that AI is going to change jobs. It is going to eliminate jobs, but it is also going to create new jobs in whole new industries. The real threat is the large labor gap facing a number of industries, which is why companies are trying to embrace these tools. AI should not be used to replace the workforce but to work alongside the workforce. By automating repetitive, predictable and non-complex tasks, generative AI tools can free up time for tasks that need more human discernment.

Jackson: It is a time of great change. Kent Beck, a software engineer and one of the original signatories of the Agile Manifesto, said of generative AI: "The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1,000x." AI will not take work away from us. It will allow us to go deeper into backlogs and to deliver more, while bringing more experimentation and innovation.

Gentry: I haven't seen a lot of employee displacement because of AI yet, but it is something that needs to be watched because of the development of agentic AI tools. From a legal perspective, it is important for companies to document why they are making employment decisions involving AI, in case they need to justify them in the future.

Moderator: With AI tools evolving so rapidly, how can businesses decide which tools are right for them?

Jackson: I think you have to tackle it from both the top down and the bottom up. From the top-down perspective, you have to ensure that your IT infrastructure can meet the demands of AI operations. AI tools should go through the same type of rubric that IT tools go through — not just for efficacy and risk, but also for cost and ROI. You have to decide which tools you are going to buy, which you are going to build and which you are going to rent. From the bottom-up perspective, we support the idea of creating AI test beds — safe and secure places that allow you to deploy and test new tools. These might include restricted versions of public LLMs or locally hosted LLMs. We enable teams to use these tools and do a lot of hands-on experiments to get insights from the user base as to what would be most impactful for their work.

Jerger: It should be approached with an established process, like any other IT implementation. You need to evaluate the vendor's stability. How viable are they? Will they be around next year? You need to do some extra homework if you are going to bank your company's future on a tool from an organization that has not been around very long. And you have to be constantly evaluating what is available, because new AI tools are coming out every day.

Gentry: Generally, we defer to clients' internal stakeholders when it comes to the selection of products and vendors, unless it has to do with contract terms. But I would emphasize that it is important to have policies and procedures in place so that you have a good foundation from which to start, and to carefully consider the potential legal and business risks that may arise where data integrity may be compromised.

Moderator: What are some of the most common pitfalls you see companies make when it comes to their use of AI?

Jerger: Jumping in too quickly and starting with the tool instead of the problem are two of the most common pitfalls. It also is important to remember that AI is not a launch-and-forget kind of thing, and that data quality, security and integrity are bigger considerations than many realize. Finally, don't discount employee and customer concerns, and live by the phrase "trust but verify."

Gentry: One of the pitfalls is looking at AI with a black-and-white framework — that you are either all-in or all-out. Somewhere on the continuum there is going to be an AI product that can help a company reduce inefficiencies. Also, don't assume that all AI products contain the same basket of legal and compliance issues. Most of our conversation has focused on LLM-based generative AI, but there are a number of other products that have been around for a while, including resume scanners, which can raise bias concerns, and employee monitoring software, which became popular during COVID and is still used today. These have a separate basket of issues — privacy concerns and employee rights, for example. In short, not all AI products should be treated the same way from a legal perspective.

Jackson: A common pitfall is a lack of clear objectives. You can also get stuck in proof-of-concept experimentation mode and end up not taking anything to production. Another common pitfall is underestimating the change management required. If you broaden the AI conversation beyond generative AI, make sure you have the discipline and rigor needed to operationalize and monitor what you are doing. Do you have model explainability and transparency, so that you can generate the right audit logs about why a model made the decisions it did? That kind of foundation is important.

Moderator: What are the one or two things that you would like readers to take away from this discussion?

Jackson: Are you investing in a culture of continuous learning? The organization of the future will be an adaptive organization, and because of AI, this is a great time to build that adaptability, that resilience to change. The highest-performing technical teams in my world are the highest-performing learning teams, because they can absorb and learn from the change that is happening around them.

Gentry: Don't stick your head in the sand, or keep it in the sand any longer. It is time to start with AI on some level, at least by having a plan for how you want to use AI. What are the problems you would like to have addressed, if not now, then sometime in the future? Second, look for the people within your organization who are excited about how AI can help the company over the long term.

Jerger: First, don't try to do everything at once. Start small. Identify one or two low-hanging fruits where AI can help solve a problem or streamline a process. Build some early wins to demonstrate AI's value, then review, review and review. Finally, remember that AI is a tool, not a magic wand. It is not the threat that people think it is. And it is here to stay.

Experts

Alli Jerger, associate dean, Business Information Technology, Waukesha County Technical College. As a passionate higher education leader and technologist, Jerger thrives at the intersection of innovation, learning and student development. As associate dean at Waukesha County Technical College, she has combined her background in IT and program development to help students and institutions adapt to the evolving demands of the digital age.

Michael Gentry, shareholder in the Labor and Employment; Data Privacy and Cybersecurity; and Artificial Intelligence practices, Reinhart Boerner Van Deuren s.c. Gentry helps clients tackle complicated problems related to their workforces, including protecting their systems and proprietary data, through counseling, contracts, strategy, negotiations and litigation. He regularly assists clients through all stages of employment counseling regarding discipline, leave and termination decisions, as well as tailoring policies and employee-facing agreements to meet their employment, proprietary data and security needs. Gentry is also a member of the firm's Data Privacy and Cybersecurity Group and Artificial Intelligence (AI) Group, where he leverages his litigation experience to help clients prevent data theft and wire fraud, and to realize the opportunities of advancing AI technologies while complying with international and U.S. data privacy laws and protecting their data. As a Certified Information Privacy Professional (CIPP/US), he brings even more value to his clients in these rapidly evolving fields.

Karl Jackson, managing director, Slalom. Jackson is a seasoned software engineering leader with over 20 years in the IT industry. He blends startup curiosity with enterprise discipline to drive innovation across modern delivery practices and AI-powered transformation. Passionate about collaboration and craft, Jackson thrives at the intersection of technology, people and meaningful client impact.
