
What should I do if my coworker is using AI unethically?

Fast Company | May 21, 2025

Welcome to Pressing Questions, Fast Company's workplace advice column. Every week, deputy editor Kathleen Davis, host of The New Way We Work podcast, will answer your biggest and most pressing workplace questions.
Q: What should I do if my coworker is using AI unethically?
A: This is a question that feels new but is actually just an evolution of a classic workplace issue. You can slot any number of issues in place of 'AI' and the problem is essentially the same: What's the best way to handle misconduct at work?
The answer for all situations, including this one, comes down to a few factors:
1. Do you know (or just suspect) your coworker is doing something they shouldn't?
2. Does the misconduct violate company policy or is it something you just don't agree with?
3. How severe is the misconduct? And is it a pattern or a one-off?
4. What is your relationship with the coworker?
Let's take this scenario through those checkpoints.
Are they actually doing something wrong?
The use of AI at work can be a contentious topic. Your first step should be to check your company's AI policy and confirm that the way you suspect your colleague is using AI actually violates that policy.
Typically, companies have varying degrees of comfort around using AI for workflow and administrative tasks, including email, scheduling, and note-taking. If your company is okay with AI use for these purposes, the policy might also include a clause requiring that the use of AI tools be disclosed (for example, letting meeting participants know that you are using an AI notetaker).
Companies should also have guidance on using AI to complete the work itself (written reports, presentations, generated images, and so on). Again, at the very least, the policy should ask that employees credit and acknowledge work that was created by or with the help of AI.
If your company doesn't have an AI policy, or the one it has is too vague, your first stop should be company leadership, to suggest the need for clearer guidelines. While your coworker should have basic ethics and know better than to submit work that's false or fabricated, or to pass off AI output as their own, they can't be blamed for violating a policy that doesn't exist.
How severe is it?
Assuming the AI use is in violation of company policy, there are a couple of approaches depending on how severe it is and your relationship with your coworker.
Using AI to help write email responses is a lot different from passing off work you didn't create, or from outsourcing quotes and data to AI without fact-checking. If it's a workflow choice you don't agree with but that comes down to personal preference, you can either bring it up directly with your coworker or go to their manager.
As long as you feel comfortable and have a good relationship, going directly to the person should be your first step. Assume good intentions. Say something like 'I noticed you are using AI notetakers for our weekly staff meeting. I think that's against our AI policy because of privacy concerns. You might want to check with John about it and see if we can have an intern take notes instead.'
If you suspect someone is passing off AI work as their own, or submitting work with AI-produced errors, it's a more delicate situation. If you aren't the person's boss, it's not for you to litigate, but before you make a potentially career-damaging accusation, do a little fact-checking.

