
Latest news with #MargaretMitchell

Historic hotel in Blowing Rock to be demolished

Yahoo

09-05-2025

  • Entertainment
  • Yahoo


A historic hotel in the North Carolina High Country is being torn down. The Green Park Inn in Blowing Rock, built in 1891, is set for demolition. During its 133-year history, the hotel's guests included two presidents, Annie Oakley, and Margaret Mitchell, who penned a portion of Gone with the Wind while staying there. On Friday, Channel 9 crews spotted workers trying to salvage some of the wood from the historic hotel. The town of Blowing Rock said the new owner wants to build condos and a forty-room hotel on the site.

Take a Tour of All the Essential Features in ChatGPT

WIRED

05-05-2025

  • WIRED


If you missed WIRED's live, subscriber-only Q&A focused on the software features of ChatGPT, hosted by Reece Rogers, you can watch the replay here.

Hello WIRED subscribers! Thank you to everyone who attended our most recent AI Unlocked webinar. I really enjoyed our lively discussion about ChatGPT's software features and wish I could have answered even more of your questions about using generative AI tools.

I especially enjoyed the questions about what ChatGPT can do beyond just chatting. Image search is a feature I use often, and here are my first impressions of the tool that I recorded back in September 2023 when it first dropped. Nowadays I use ChatGPT's image search by snapping a picture with my phone when I don't recognize something: I'll upload the photo and ask the chatbot what it is. For example, I was recently at an Irish bar and learned what a hurley was when I saw one hanging on the wall and was perplexed. (Although I also could have just asked the friendly bartenders when I got another coffee drink.)

What are some ChatGPT features that I wasn't able to go deep on during the 45-minute session? Two come to mind: temporary chats and memory. Temporary chats keep things slightly more private. For example, the log of the conversation will not appear on the left side of your screen when it's over, as it normally does. Temporary chats are not protected enough that you should feel comfortable sharing private information—definitely still don't do that—but this is a nice option for chats you don't necessarily need or want saved for your own use later. ChatGPT's memory function has gotten better over time. While some information about you will be stored passively as you use the tool (unless you turn it off in the settings), I think actively engaging with ChatGPT's memory by telling it your preferences leads to a better software experience overall.

What are the drawbacks of ChatGPT? There are three I want to highlight here. These are all issues I keep in mind as I'm using any AI tool, and I think you should as well. First, hallucinations are still a problem, so you should never put full trust in an AI's output. Always double-check your answers against trusted sources of information. Second, generative AI amplifies biases. Some biases are very evident, while others are more subtle. Check out my interview with Margaret Mitchell, an AI ethics researcher at Hugging Face, to learn more about how a top researcher is thinking about stereotypes within AI tools. Third, generative AI tools are resource intensive compared with other software programs, and the overall environmental impact of your usage may be much greater than you'd expect.

If you want to know more about data privacy when it comes to ChatGPT, this recent article from WIRED's security team about what happens to your data if you follow the 'AI action figure' trend on social media is worth reading. As I mentioned in the webinar, our article about how to opt out of AI training is also worth checking out for more context and to learn what your options are across many different websites.

Haven't signed up for season two of the AI Unlocked newsletter yet? I would definitely recommend doing so. The 10 editions in season two include many hands-on tasks for trying out multiple AI tools and thinking critically about their strengths and weaknesses, along with plenty of prompting tips for those hoping to better understand how to craft these kinds of software interactions.

I also spent plenty of time answering more reader questions—one of the most fun and engaging parts of my job. I really appreciate you taking the time out of your day to support WIRED and watch this webinar. Talk to you soon.

Today in History: May 3, Oklahoma City struck by historic tornado

Boston Globe

03-05-2025

  • Politics
  • Boston Globe


In 1937, Margaret Mitchell won the Pulitzer Prize for her novel, 'Gone with the Wind.'

In 1948, the Supreme Court, in Shelley v. Kraemer, ruled that covenants prohibiting the sale of real estate to Blacks or members of other racial groups were legally unenforceable.

In 1979, the Conservative Party ousted the incumbent Labour government in British parliamentary elections. Conservative leader Margaret Thatcher would become the first female UK Prime Minister the following day.

In 1986, aboard the longshot horse Ferdinand, Bill Shoemaker became the oldest jockey to win the Kentucky Derby at age 54.

In 1999, the Bridge Creek–Moore tornado struck the Oklahoma City metropolitan area, causing 41 deaths and nearly 600 injuries. The tornado's top wind speed of 321 miles per hour was the highest ever recorded on Earth.

In 2003, the 'Old Man of the Mountain,' a 40-foot-tall granite outcropping in Franconia, N.H., that resembled a human face in profile, collapsed despite decades of preservation efforts.

In 2015, two gunmen were killed by a SWAT team in Garland, Texas, after they opened fire outside a purposely provocative contest for cartoon depictions of the Prophet Muhammad.

In 2016, in a stunning triumph for a political outsider, Donald Trump all but clinched the Republican presidential nomination with a resounding victory in the Indiana primary election that knocked rival Ted Cruz out of the race.

In 2018, a federal grand jury in Detroit indicted former Volkswagen CEO Martin Winterkorn on charges stemming from the company's diesel emissions cheating scandal. (Under Germany's constitution, he could not be extradited to the US to face charges.)

AI Is Spreading Old Stereotypes to New Languages and Cultures

WIRED

23-04-2025

  • Science
  • WIRED


Margaret Mitchell, an AI ethics researcher at Hugging Face, tells WIRED about a new dataset designed to test AI models for bias in multiple languages.

Margaret Mitchell is a pioneer when it comes to testing generative AI tools for bias. She founded the Ethical AI team at Google, alongside another well-known researcher, Timnit Gebru, before they were both later fired from the company. She now works as the AI ethics leader at Hugging Face, a software startup focused on open source tools. We spoke about a new dataset she helped create to test how AI models continue perpetuating stereotypes. Unlike most bias-mitigation efforts that prioritize English, this dataset is malleable, with human translations for testing a wider breadth of languages and cultures. You probably already know that AI often presents a flattened view of humans, but you might not realize how these issues can be made even more extreme when the outputs are no longer generated in English. My conversation with Mitchell has been edited for length and clarity.

Reece Rogers: What is this new dataset, called SHADES, designed to do, and how did it come together?

Margaret Mitchell: It's designed to help with evaluation and analysis, coming about from the BigScience project. About four years ago, there was this massive international effort, where researchers all over the world came together to train the first open large language model. By fully open, I mean the training data is open as well as the model. Hugging Face played a key role in keeping it moving forward and providing things like compute. Institutions all over the world were paying people as well while they worked on parts of this project. The model we put out was called Bloom, and it really was the dawn of this idea of 'open science.'

We had a bunch of working groups to focus on different aspects, and one of the working groups that I was tangentially involved with was looking at evaluation. It turned out that doing societal impact evaluations well was massively complicated—more complicated than training the model. We had this idea of an evaluation dataset called SHADES, inspired by Gender Shades, where you could have things that are exactly comparable, except for the change in some characteristic. Gender Shades was looking at gender and skin tone. Our work looks at different kinds of bias types and swapping amongst some identity characteristics, like different genders or nations.

There are a lot of resources in English and evaluations for English. While there are some multilingual resources relevant to bias, they're often based on machine translation as opposed to actual translations from people who speak the language, who are embedded in the culture, and who can understand the kind of biases at play. They can put together the most relevant translations for what we're trying to do.

So much of the work around mitigating AI bias focuses just on English and stereotypes found in a few select cultures. Why is broadening this perspective to more languages and cultures important?

These models are being deployed across languages and cultures, so mitigating English biases—even translated English biases—doesn't correspond to mitigating the biases that are relevant in the different cultures where these are being deployed. This means that you risk deploying a model that propagates really problematic stereotypes within a given region, because they are trained on these different languages.

So, there's the training data. Then, there's the fine-tuning and evaluation. The training data might contain all kinds of really problematic stereotypes across countries, but then the bias mitigation techniques may only look at English. In particular, it tends to be North American– and US-centric. While you might reduce bias in some way for English users in the US, you've not done it throughout the world. You still risk amplifying really harmful views globally because you've only focused on English.

Is generative AI introducing new stereotypes to different languages and cultures?

That is part of what we're finding. The idea of blondes being stupid is not something that's found all over the world, but is found in a lot of the languages that we looked at. When you have all of the data in one shared latent space, then semantic concepts can get transferred across languages. You're risking propagating harmful stereotypes that other people hadn't even thought of.

Is it true that AI models will sometimes justify stereotypes in their outputs by just making shit up?

That was something that came out in our discussions of what we were finding. We were all sort of weirded out that some of the stereotypes were being justified by references to scientific literature that didn't exist. Outputs saying that, for example, science has shown genetic differences where it hasn't been shown, which is a basis of scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or having academic support. It spoke about these things as if they're facts, when they're not factual at all.

What were some of the biggest challenges when working on the SHADES dataset?

One of the biggest challenges was around the linguistic differences. A really common approach for bias evaluation is to use English and make a sentence with a slot like: 'People from [nation] are untrustworthy.' Then, you flip in different nations. When you start putting in gender, now the rest of the sentence starts having to agree grammatically on gender. That's really been a limitation for bias evaluation, because if you want to do these contrastive swaps in other languages—which is super useful for measuring bias—you have to have the rest of the sentence changed. You need different translations where the whole sentence changes. How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistic nerds. So, now you can do these contrastive statements across all of these languages, even the ones with the really hard agreement rules, because we've developed this novel, template-based approach for bias evaluation that's syntactically sensitive.

Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still prevalent? It's an issue that seems under-addressed.

That's a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it's believed that it's not really that big of a problem. Or, if it is, it's a pretty simple fix. What will be prioritized, if anything is prioritized, are these simple approaches that can go wrong. We'll get superficial fixes for very basic things. If you say girls like pink, it recognizes that as a stereotype, because it's just the kind of thing that if you're thinking of prototypical stereotypes pops out at you, right? These very basic cases will be handled. It's a very simple, superficial approach where these more deeply embedded beliefs don't get addressed. It ends up being both a cultural issue and a technical issue of finding how to get at deeply ingrained biases that aren't expressing themselves in very clear language.
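To make the contrastive-template idea Mitchell describes a little more concrete, here is a minimal sketch of that style of evaluation. It is purely illustrative and is not the SHADES dataset or its code: the template, the list of nations, the choice of GPT-2, and the negative-log-likelihood scoring are all assumptions invented for this example.

```python
# Illustrative sketch of contrastive, template-based bias probing.
# NOT the SHADES dataset or its code: the template, identity terms,
# model choice, and scoring method here are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder causal language model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# In English, only the slot changes; the rest of the sentence is shared,
# so the variants are exactly comparable except for one characteristic.
template = "People from {nation} are untrustworthy."
nations = ["France", "Nigeria", "India", "Brazil"]

def sentence_nll(text: str) -> float:
    """Total negative log-likelihood the model assigns to a sentence."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean per-token cross-entropy; scale by token count.
    return out.loss.item() * enc["input_ids"].shape[1]

for nation in nations:
    sentence = template.format(nation=nation)
    print(f"{sentence!r}: NLL = {sentence_nll(sentence):.2f}")

# A noticeably lower NLL for one variant suggests the model treats that
# identity/stereotype pairing as more "plausible" than the others.
# In languages with grammatical gender or number agreement, a single
# string template like this breaks down, because swapping the identity
# term forces changes throughout the sentence -- which is why SHADES
# relied on human-translated, syntactically sensitive templates.
```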

Who bought this smoked salmon? How 'AI agents' will change the internet (and shopping lists)

The Guardian

09-03-2025

  • The Guardian


I'm watching artificial intelligence order my groceries. Armed with my shopping list, it types each item into the search bar of a supermarket website, then uses its cursor to click. Watching what appears to be a digital ghost do this usually mundane task is strangely transfixing. 'Are you sure it's not just a person in India?' my husband asks, peering over my shoulder. I'm trying out Operator, a new AI 'agent' from OpenAI, the maker of ChatGPT. Made available to UK users last month, it has a similar text interface and conversational tone to ChatGPT, but rather than just answering questions, it can actually do things – provided they involve navigating a web browser. Hot on the heels of large language models, AI agents have been trumpeted as the next big thing, and you can see the appeal: a digital assistant that can complete practical tasks is more compelling than one that can just talk back. Similar to OpenAI's offering, Anthropic introduced 'computer use' capabilities to its Claude chatbot towards the end of last year. Perplexity and Google have also released 'agentic' features into their AI assistants, with further companies developing agents aimed at specific tasks such as coding or research. There's debate over what exactly counts as an AI agent, but the general idea is that they need to be able to take actions with some degree of autonomy. 'As soon as something is starting to execute actions outside of the chat window, then it's gone from being a chatbot to an agent,' says Margaret Mitchell, the chief ethics scientist at AI company Hugging Face. It's early days. Most commercially available agents come with a disclaimer that they're still experimental – OpenAI describes Operator as a 'research preview' – and you can find plenty of examples online of them making amusing mistakes, such as spending $31 on a dozen eggs or trying to deliver groceries back to the shop they bought them from. Depending on who you ask, agents are just the next overhyped tech toy or the dawn of an AI future that could shake up the workforce, reshape the internet and change how we live. 'In principle, they would be amazing, because they could automate a lot of drudgery,' says Gary Marcus, a scientist and sceptic of large language models. 'But I don't think they will work reliably any time soon, and it's partly an investment in hype.' I sign up for Operator to see for myself. With no food in the house, grocery shopping seems like a good first task. I type my request and it asks if I have a preferred shop or brand. I tell it to go with whichever is cheapest. A window appears showing a web browser and I see it search 'UK online grocery delivery'. A mouse cursor selects the first result: Ocado. It starts searching for my requested items and filters the results by price. It selects products and clicks 'Add to trolley'. I'm impressed with Operator's initiative; it doesn't pepper me with questions, instead making an executive decision when given only a brief item description, such as 'salmon' or 'chicken'. When it searches for eggs, it successfully scrolls past several non-egg items that appear as special offers. My list asks for 'a few different vegetables': it selects a head of broccoli, then asks if I'd like anything else specific. I tell it to choose two more and it goes for carrots and leeks – probably what I'd have picked myself. Emboldened, I tell it to add 'a sweet treat' and watch as it literally types 'sweet treat' into the search bar. 
I'm not sure why it chooses 70% chocolate – certainly not the cheapest option – but I tell it I don't like dark chocolate and it swaps it for a Galaxy bar. We hit a snag when Operator realises that Ocado has a minimum spend, so I add more items to the list. Then it comes to logging in, and the agent prompts me to intervene: while users can take over the browser at any point, OpenAI says Operator is designed to request this 'when inputting sensitive information into the browser, such as login credentials or payment information'. Although Operator usually takes constant screenshots in order to 'see' what it's doing, OpenAI says it does not do this when a user takes control. At the checkout, I test the waters by asking Operator to complete payment. I take back the reins, however, when it responds by asking for my card details. I've already given OpenAI my payment information (Operator requires a ChatGPT Pro account, which costs $200 a month) but I feel uncomfortable sharing this directly with an AI. Order placed, I await my delivery the following day. But that doesn't solve dinner. I give Operator a new task: can it order me a cheeseburger and chips from a local, highly rated restaurant? It asks for my postcode, then loads the Deliveroo website and searches 'cheeseburger'. Again, there's a pause when I have to log in, but as Deliveroo already has my card details stored, Operator can proceed directly to payment. The restaurant it selects is local, and it is highly rated – as a fish and chip shop. I end up with a passable cheeseburger and a large bag of chippy-style chips. Not exactly what I'd envisioned but not wrong, either. I'm mortified, however, when I realise Operator skipped over tipping the delivery rider. I sheepishly take my food and add a generous tip after the fact. Of course, watching Operator in action rather defeats the time-saving point of using an AI agent for online tasks. Instead, you can leave it to work in the background while you focus on other tabs. While drafting this piece, I make another request: can it book me a gel manicure at a local salon? Operator struggles more with this task. It goes to beauty booking platform Fresha but, when it prompts me to log in, I see it has chosen an appointment a week too late and more than an hour's drive away from my home in east London. I point out these issues and it finds a slot for the right date but in Leicester Square – still a distance away. Only then does it ask my location, and I realise it must not have retained this knowledge between tasks. By this point, I could have already made my own booking. Operator eventually suggests a suitable appointment, but I abandon the task and chalk it up as a win for Team Human. It's clear that this first generation of AI agents has limitations. Having to stop and log in requires a fair amount of human oversight, though Operator stores cookies to allow users to stay logged into websites on subsequent visits (OpenAI says it requires closer supervision on 'particularly sensitive' sites, such as email clients or financial services). The results, while usually accurate, aren't always what I have in mind. When my groceries arrive, I find that Operator has ordered smoked salmon rather than fillets and has doubled up on yoghurt, possibly because of a special offer. It interpreted 'some fish cakes' to mean three packs (I intended just one) and was only saved the indignity of buying chocolate milk instead of plain as the product was out of stock. 
To be fair to the bot, I had the opportunity to review the order, and I would have got better results if I'd been more specific in my prompts ('a pack of two raw salmon fillets') – but these extra steps would also detract from the effort saved. Despite current flaws, my experience with Operator feels like a glimpse of something to come. As such systems improve, and reduce in cost, I could easily see them becoming embedded in everyday life. You might already write your shopping list on an app; why wouldn't it also place the order? Agents are also infiltrating workflows beyond the realm of a personal assistant. OpenAI's chief executive, Sam Altman, has predicted that AI agents could 'join the workforce' this year. Software developers are among the early adopters; coding platform GitHub recently added agentic capabilities to its AI Copilot tool. GitHub's CEO, Thomas Dohmke, says developers are used to some level of automated assistance; the difference with AI agents is the level of autonomy. 'Instead of you just asking a question and it gives you an answer, you give it a problem and then it iterates on that problem together with the code that it has access to,' he says. GitHub is already working on an agent with greater autonomy, which it calls Project Padawan (a Star Wars term referring to a Jedi apprentice). This would allow an AI agent to work asynchronously rather than requiring constant oversight; a developer could have teams of agents reporting to them, producing code for their review. Dohmke says he doesn't believe developers' jobs are at risk, as their skills will find increasing demand. 'I'd argue the amount of work that AI has added to most developers' backlog is higher than the amount of work it has taken over,' he says. Agents could also make coding tasks, such as building an app, more accessible to non-technical people. Outside software development, Dohmke envisions a future when everyone has their own personal Jarvis, the talking AI in Iron Man. Your agent will learn your habits and become customised to your tastes, making it more useful. He'd use his to book holidays for his family. The more autonomy agents have, however, the greater risks they pose. Mitchell, from Hugging Face, co-authored a paper warning against the development of fully autonomous agents. 'Fully autonomous means that human control has been fully ceded,' she says. Rather than working within set boundaries, a fully autonomous agent could gain access to things you don't realise or behave in unexpected ways, especially if it can write its own code. It's not a big deal if an AI agent gets your takeout order wrong, but what if it starts sharing your personal information with scam websites or posting horrific social media content under your name? High-risk workplaces could introduce particularly hazardous scenarios: what if it can access a missile command system? Mitchell hopes technologists, legislators and policymakers will incentivise guardrails to mitigate such incidents. For now, she foresees agentic capabilities becoming more refined for specific tasks. Soon, she says, we'll see agents interacting with agents – your agent could work with mine to set up a meeting, for example. This proliferation of agents could reshape the internet. Currently, a lot of information online is specialised for human language, but if AIs are increasingly interacting with websites, this could change. 
'We're going to see more and more information available through the internet that is not directly human language, but is the information that would be necessary for an agent to be able to act on it,' Mitchell says. Dohmke echoes this idea. He believes that the concept of the homepage will lose importance, and interfaces will be designed with AI agents in mind. Brands may start competing for AI attention over human eyeballs. One day, agents may even escape the confines of the computer. We could see AI agents embodied in robots, which would open up a world of physical tasks for them to help with. 'My prediction is that we're going to see agents that can do our laundry for us and do our dishes and make us breakfast,' says Mitchell. 'Just don't give them access to weapons.'
