Latest news with #MelanieMitchell


Winnipeg Free Press
3 days ago
- Business
City's year-old short-term rental rules 'tedious' for some property owners, too much for others
Just over a year has passed since the city introduced bylaws to license and tax short-term accommodation rentals, and while some property owners have adapted to the rules, others have abandoned the sector.

'It's cumbersome and tedious, but business as usual as far as I can tell,' Melanie Mitchell, president of the Manitoba Association of Short Term Rentals, said of the requirements. 'The regulations are tough to follow and we don't understand why some of them are put in place, but… the short-term rental hosts are complying and still being productive and profitable.'

The move to regulate the sector was prompted, in part, by concerns that short-term rentals were taking up valuable housing stock. Some Winnipeg residents also feared such properties were becoming frequent sources of parties and criminal activity.

Under the new regulations, introduced in April 2024, property owners can license only one short-term rental unit at their primary residence. Some owners were grandfathered into the regulations, allowing them to license one primary residence and up to three secondary residences, as long as they owned the properties before Feb. 23, 2023. Operators must now limit the period renters can stay in their properties to less than 30 days and pay a six per cent accommodation tax — among other requirements.

A new annual report shows the licensing fees and taxes totalled just under $1.3 million as of March 31. After accounting for expenses, the changes brought about $700,000 into city coffers.

According to the report, the city issued 698 licences over the past year. Of that number, 341 were for primary residences, while 352 were for non-primary. Another five licences were granted for rental platforms, which advertise bookings and collect fees on behalf of the operator. Examples of such platforms include popular websites like Airbnb and Vrbo. As of April 1, 20 new short-term rental applications were under review, the report said.

Mitchell said she hoped enforcement data included in the report would put safety concerns to rest. According to the data, the city received 165 complaint reports and collected $9,625 in fines over one year. City staff inspected 1,053 properties and reported an 86 per cent compliance rate with regulations. They found 39 rentals operating without a licence, three advertising without a licence number and 25 not posting the owner's contact information or licence number at the rental. In total, 198 short-term rental applications were denied or withdrawn, and 19 others were cancelled by the applicant, the report said.

'Their own data shows that there were very few fines handed out, very few licences denied. And, I'm happy there were some licences denied because there was a black spot in our industry and hopefully that has been stamped out,' Mitchell said.

Community services committee chair Coun. Vivian Santos said the report is critical for determining the scale and needs of Winnipeg's short-term rental sector.

'Now that we've brought this bylaw forward and we have these licences, it's good to see this data open and transparent,' said Santos (Point Douglas).

The report will help inform a larger, ongoing study financed by the federal government, which committed nearly $800,000 to review short-term rental regulations in Winnipeg.
Santos noted city council is open to amending the regulations, should they need fine-tuning. Mitchell said changes are necessary.

She argued against the 30-day limit on short-term rental stays, calling it a 'nuisance.' The regulation makes it difficult to rent to people visiting Winnipeg for extended stays, or to those who have been displaced from their homes for long periods due to emergencies, she said. Before the rules were enacted, many owners would prioritize mid-term bookings (beyond 30 days, but less than one year) and fill any occupancy gaps with short-term stays, she said.

Former short-term rental owner Kevin Barske said he chose not to renew his licence in April out of frustration with the rule. Without the ability to offer both short- and mid-term stays, his occupancy rate dropped to about 56 per cent over the last year, he said.

'My bread and butter was always business people and insurance claims that were coming into Winnipeg for longer than 30 days,' Barske said. 'My unit is occupied less, I'm making less money, plus the extra expenses of licensing and all the headaches for compliance — all that stuff — I just figured it wasn't worth the hassle.'

The community services committee is set to review the annual report during a meeting Friday.

Tyler Searle, Reporter
Yahoo
09-05-2025
3D printed gun, narcotics seized in Burke County operation
BURKE COUNTY, N.C. (QUEEN CITY NEWS) — The Burke County Sheriff's Office executed an operation near Drexel Road and Summers Road in response to numerous citizen complaints regarding suspected drug trafficking and other criminal activity in the area.

At approximately 11:30 p.m. on Wednesday, May 7, investigators observed a suspicious vehicle leaving a location known for illegal narcotics activity. A traffic stop was initiated, during which a North Carolina Department of Adult Corrections K-9 conducted an open-air sniff and alerted to the presence of potential contraband inside the vehicle. The driver, identified as Melanie Marie Mitchell, became combative and refused to comply with commands to exit the vehicle. Officers removed both Mitchell and the passenger, identified as Beverly Denise Freeman, and conducted a search of the vehicle.

During the search, law enforcement discovered a fully functional, 3D-printed firearm in Mitchell's possession. The firearm had no serial number, rendering it untraceable. Officers also located a small cosmetics case with a digital scale, a clear plastic bag containing a crystalline substance, and a white powdery substance. Field testing confirmed the presence of methamphetamine and fentanyl.

Mitchell, a convicted felon, was taken into custody on multiple charges, including possession of a firearm by a felon and various drug-related offenses. Freeman was found in possession of a pill bottle containing a clear plastic bag with a grayish powder and multiple prescription medications, including Adderall, a Schedule II controlled substance. Field testing confirmed the presence of fentanyl in the substance.
Yahoo
26-04-2025
- Science
We Now Know How AI 'Thinks'—and It's Barely Thinking at All
The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn't think like us.

The work of these researchers suggests there's something fundamentally limiting about the underlying architecture of today's AI models. Today's AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter.

This contrasts with the many ways that humans and even animals are able to reason about the world and predict the future. We biological beings build 'world models' of how things work, which include cause and effect.

Many AI engineers claim that their models, too, have built such world models inside their vast webs of artificial neurons, as evidenced by their ability to write fluent prose that indicates apparent reasoning. Recent advances in so-called 'reasoning models' have further convinced some observers that ChatGPT and others have already reached human-level ability, known in the industry as AGI, for artificial general intelligence.

For much of their existence, ChatGPT and its rivals were mysterious black boxes. There was no visibility into how they produced the results they did, because they were trained rather than programmed, and the vast number of parameters that comprised their artificial 'brains' encoded information and logic in ways that were inscrutable to their creators. But researchers are developing new tools that allow them to look inside these models. The results leave many questioning the conclusion that they are anywhere close to AGI.

'There's a controversy about what these models are actually doing, and some of the anthropomorphic language that is used to describe them,' says Melanie Mitchell, a professor at the Santa Fe Institute who studies AI.

New techniques for probing large language models—part of a growing field known as 'mechanistic interpretability'—show researchers the way these AIs do mathematics, learn to play games or navigate through environments. In a series of recent essays, Mitchell argued that a growing body of work suggests models develop gigantic 'bags of heuristics,' rather than creating more efficient mental models of situations and then reasoning through the tasks at hand. ('Heuristic' is a fancy word for a problem-solving shortcut.)

When Keyon Vafa, an AI researcher at Harvard University, first heard the 'bag of heuristics' theory, 'I feel like it unlocked something for me,' he says. 'This is exactly the thing that we're trying to describe.'

Vafa's own research was an effort to see what kind of mental map an AI builds when it's trained on millions of turn-by-turn directions like what you would see on Google Maps. Vafa and his colleagues used as source material Manhattan's dense network of streets and avenues. The result did not look anything like a street map of Manhattan.
Close inspection revealed the AI had inferred all kinds of impossible maneuvers—routes that leapt over Central Park, or traveled diagonally for many blocks. Yet the resulting model managed to give usable turn-by-turn directions between any two points in the borough with 99% accuracy.

Even though its topsy-turvy map would drive any motorist mad, the model had essentially learned separate rules for navigating in a multitude of situations, from every possible starting point, Vafa says. The vast 'brains' of AIs, paired with unprecedented processing power, allow them to learn how to solve problems in a messy way that would be impossible for a person.

Other research looks at the peculiarities that arise when large language models try to do math, something they have historically been bad at, though they're getting better. Some studies show that models learn a different set of rules for multiplying numbers in a certain range—say, from 200 to 210—than they use for multiplying numbers in some other range. If you think that's a less than ideal way to do math, you're right.

All of this work suggests that under the hood, today's AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they're asked to do things even a little bit outside their training, says Vafa. When his team blocked just 1% of the virtual Manhattan's roads, forcing the AI to navigate around detours, its performance plummeted.

This illustrates a big difference between today's AIs and people, he adds. A person might not be able to recite turn-by-turn directions around New York City with 99% accuracy, but they'd be mentally flexible enough to avoid a bit of roadwork.

This research also suggests why many models are so massive: They have to memorize an endless list of rules of thumb, and can't compress that knowledge into a mental model like a person can. It might also help explain why they have to learn from such enormous amounts of data, whereas a person can pick something up after just a few trials: To derive all those individual rules of thumb, they have to see every possible combination of words, images, game-board positions and the like. And to really train them well, they need to see those combinations over and over. This research might also explain why AIs from different companies all seem to be 'thinking' the same way, and are even converging on the same level of performance—performance that might be plateauing.

AI researchers have gotten ahead of themselves before. In 1970, Massachusetts Institute of Technology professor Marvin Minsky told Life magazine that a computer would have the intelligence of an average human being in 'three to eight years.' Last year, Elon Musk claimed that AI will exceed human intelligence by 2026. In February, Sam Altman wrote on his blog that 'systems that start to point to AGI are coming into view,' and that this moment in history represents 'the beginning of something for which it's hard not to say, "This time it's different."' On Tuesday, Anthropic's chief security officer warned that 'virtual employees' will be working in U.S. companies within a year.

Even if these prognostications prove premature, AI is here to stay, and to change our lives. Software developers are only just figuring out how to use these undeniably impressive systems to help us all be more productive.
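To make the 'bag of heuristics' idea described above concrete, here is a minimal sketch. The toy street grid, the memorized route table and the function names are invented for illustration and are not Vafa's actual setup; the point is only the contrast between a navigator that has memorized one rule per origin-destination pair and one that keeps a compact world model of the street graph and searches it. Closing a single street breaks the memorized rule, while the graph-based navigator simply routes around the detour, loosely mirroring the 1% detour result.

```python
from collections import deque

# A toy "city": nodes are intersections, edges are two-way streets.
# (Invented example, not the Manhattan data used in the study.)
STREETS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

# "Bag of heuristics": one memorized route per (start, goal) pair,
# learned from training traces. It answers familiar queries correctly
# but encodes no picture of how the streets actually connect.
MEMORIZED_ROUTES = {
    ("A", "E"): ["A", "B", "D", "E"],
    ("C", "E"): ["C", "D", "E"],
    # ...one entry for every pair ever seen in training...
}

def heuristic_navigate(start, goal, closed=frozenset()):
    """Replay the memorized route; fail if any street on it is closed."""
    route = MEMORIZED_ROUTES.get((start, goal))
    if route is None:
        return None  # this pair never appeared in training
    for a, b in zip(route, route[1:]):
        if (a, b) in closed or (b, a) in closed:
            return None  # no concept of a detour
    return route

def world_model_navigate(start, goal, closed=frozenset()):
    """Breadth-first search over the street graph: a compact model of
    how the city is connected, so detours fall out for free."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in STREETS[node]:
            blocked = (node, nxt) in closed or (nxt, node) in closed
            if nxt not in seen and not blocked:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

if __name__ == "__main__":
    closed = {("B", "D")}  # block one street, as in the detour test
    print(heuristic_navigate("A", "E", closed))    # None: memorized rule breaks
    print(world_model_navigate("A", "E", closed))  # ['A', 'C', 'D', 'E']
```

The same contrast applies to the range-specific multiplication rules mentioned above: a big enough table of special cases can be accurate on familiar inputs yet collapse on anything just outside them, whereas a general procedure compresses the task into a few reusable steps.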
And while their inherent smarts might be leveling off, work on refining them continues. Meanwhile, research into the limitations of how AI 'thinks' could be an important part of making them better.

In a recent essay, MIT AI researcher Jacob Andreas wrote that better understanding of language models' challenges leads to new ways to train them: 'We can make LMs better (more accurate, more trustworthy, more controllable) as we start to address those limitations.'

Write to Christopher Mims at