Amazon's Delivery Drones Are Crashing, and We Finally Know Why
Amazon, the multi-trillion dollar e-commerce monolith, seemingly cheaped out on a key feature of its six-propeller delivery drones. Predictably, this backfired almost immediately.
On a (lightly) rainy December day at the company's testing range in Oregon, not one but two Prime Air drones suddenly stopped spinning their propellers mid-flight and plummeted some 200 feet to the ground. The crashes, which destroyed both aircraft, happened within minutes of each other.
And now, Bloomberg reports, we know why. According to documents from the National Transportation Safety Board, bad readings from the onboard lidar sensors led the drones to believe they had already landed. The software, convinced the aircraft were on solid ground, cluelessly cut power to the propellers.
But that's not all. Amazon had removed the backup sensors equipped on older versions of the drone, and those sensors probably would've prevented the shutdowns, per Bloomberg's sources.
Amazon disputes that reasoning.
"Bloomberg's reporting is misleading," an Amazon spokesperson told the newspaper. "Statements that assume that replacing one system with another would have prevented an accident in the past is irresponsible."
Per the NTSB documents, a botched software update made the lidar sensors more susceptible to being thrown off by rain. Lidar, short for light detection and ranging, uses lasers to scan a vehicle's surroundings, much as radar does with radio waves.
A glitch may have been the immediate cause, but the crashes sound like they could've been easily avoided had Amazon kept the redundant system from its previous drone, the MK27.
The backup sensors came in the form of two metal prongs on the bottom of the drone called squat switches. When the drone lands, the switches are depressed, providing confirmation that it's on solid ground. A source told Bloomberg that the MK27 drone's software was originally designed to confirm a landing only when two of its three sensors agreed.
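To illustrate the kind of redundancy described here, below is a minimal sketch of a two-out-of-three landing vote. The function and sensor names are hypothetical and not drawn from Amazon's actual flight software; the point is simply that requiring agreement between independent sensors keeps a single bad lidar reading from cutting the motors.

```python
# Hypothetical illustration of a two-out-of-three landing vote.
# None of these names come from Amazon's flight software; the logic only
# shows why one rain-fooled lidar reading can't trigger a shutdown alone.

def confirm_landing(lidar_says_ground: bool,
                    left_squat_switch: bool,
                    right_squat_switch: bool) -> bool:
    """Return True only if at least two independent sensors agree
    the aircraft is on the ground."""
    votes = [lidar_says_ground, left_squat_switch, right_squat_switch]
    return sum(votes) >= 2

# A lidar false positive on its own is outvoted, so the propellers keep spinning.
assert confirm_landing(True, False, False) is False
# Lidar plus a depressed squat switch confirms a real touchdown.
assert confirm_landing(True, True, False) is True
```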
The squat switches were removed on the MK30, and it's unclear why. Cutting costs could be one reason, and it's not uncommon to strip redundant systems to streamline a device.
It could also be because of a shift in how Amazon intends to deliver packages with its drones, according to a Bloomberg source. The MK27 was designed to land in a customer's yard to make deliveries, with enclosed propellers for safety. The MK30 moved away from this and instead drops packages from around a dozen feet in the air.
This is far from the only setback for Amazon's drone delivery program, which the company first announced back in 2013. In 2021, a drone crash sparked an acres-wide blaze in Oregon, and outside of incidents like that, many residents living where the drones are being trialed simply find them annoying. After the December snafu, Amazon paused flights for months (though it maintained that the crashes weren't the "primary reason") and only recently lifted the pause.
Overall, development has been sluggish and the project remains years behind schedule. Currently, Amazon is only carrying out drone deliveries in College Station, Texas, and Tolleson, Arizona.
More on Amazon: The NYPD Is Sending Drones to the Sites of 9-1-1 Calls
