
A cult with cars? I'm inside this automotive clubhouse, soaking up a cool vibe
Tucked neatly behind a series of industrial complexes, studios and garages in Leaside, Toronto, lies RCLUB, a mysterious and somewhat misunderstood automotive social club. Some say its memberships cost upwards of $15,000. Others say it's an exclusive members-only club for the fortunate few who own one of the world's great cars.
The truth?
It's a hub for all who love anything to do with automotive culture, and it provides its members a community to share in their passions.
Since its creation in 2019, RCLUB (as in Our Club) has grown and evolved while staying true to its roots. Built on the vision of founder Adam Westland to offer a luxury car-sharing platform, the club gave members the opportunity to borrow high-end cars whenever they pleased. (Westland has a background in IT infrastructure design for hotel chains.)
The model has since changed and the members have stayed. The shift in gear from supercar borrowing to automotive social club really started to ramp up over the past six months, after Nick Cassells, 42, and his wife, Whitney Bloom, 37, joined up with Westland. 'Today, the club has really evolved into this large community of auto enthusiasts,' says Bloom. 'I think that's what's so great about the community at RCLUB, it's just really inclusive. It's not just a car club; it's a social club.'
Now the club screams family and community, almost as loudly as the V10s of the Lamborghinis parked out back roar when they arrive.
'We both have full-time jobs, but we are passionate about the RCLUB community, and while it's busy (with a three-year-old, a nine-month-old and a dog), we are dedicated to keeping our unique car culture alive … even if it means stretching the boundaries of time,' says Bloom, who works for Meta.
Cassells spent a decade at Labatt, where he learned how to create an atmosphere that makes people feel welcome, and he attended culinary school at George Brown. 'Nick is always working to ensure the clubhouse, like his kitchen, is clean and fuelled with good energy,' says Bloom.
The injection of Cassells' hospitality expertise has invigorated the club. Even with its hidden location, and some confusion surrounding it since the change of direction, the 'exclusive' members' car club has grown into the largest individually held, locally owned car club in Canada.
I went along to the club's Miami Formula One viewing party to sample the vibe. Some of the cars parked outside diverted partygoers' attention; a trio of Lamborghinis, in particular, caught the eye. When I said I'd like to film some rolling footage of the Lamborghini squad, a club member jumped into action and rallied the three owners for a drive. He even hopped behind the wheel of my parents' Honda Fit so I would be free to capture the moment through my lens as we took over the Bayview Extension for a Sunday afternoon drive.
It was generous of them to do this for a young photographer, and it highlights the big heart of this automotive family. I won't dwell on the features of the club: the mechanic bays, car storage, cars to swap and test, SIM racing, live events and discounts on parts and tires.
What truly makes this club stand out is the inclusive vibe its members have built. Perhaps a line from the 'Fast & Furious' franchise captures it best: 'Everyone becomes family. It's like a cult with cars.' — Aimes in 'Fast X.'