
Latest news with #TheOptimist

Hey ChatGPT, which one of these versions truly is the real Sam Altman?

Business Standard

25-05-2025



If the aim is not, in the first place, to help the world, but instead to get bigger - better chips, more data, smarter code - then our problems might just get bigger too.

By Tim Wu (NYT)

EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. Published by Penguin Press. 482 pages. $32.

THE OPTIMIST: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey. Published by W. W. Norton & Company. 367 pages. $31.99.

The 'paper clip problem' is a well-known ethics thought experiment. It imagines a superintelligent AI charged with the seemingly harmless goal of making as many paper clips as possible. Trouble is, as the philosopher Nick Bostrom put it in 2003, without common-sense limits it might transform 'first all of earth and then increasing portions of space into paper clip manufacturing facilities.' The tale has long served as a warning about objectives pursued too literally.

Two new books that orbit the entrepreneur Sam Altman and the firm he co-founded, OpenAI, suggest we may already be living with a version of the problem. In Empire of AI, journalist Karen Hao argues that the pursuit of an artificial superintelligence has become its own figurative paper clip factory, devouring too much energy, minerals and human labour. The Optimist, by the Wall Street Journal reporter Keach Hagey, leaves readers suspecting that the earnest and seemingly innocuous paper clip maker who ends up running the world for his own ends could be Altman himself.

Hao portrays OpenAI and other companies that make up the fast-growing AI sector as a 'modern-day colonial world order.' Much like the European powers of the 18th and 19th centuries, they 'seize and extract precious resources to feed their vision of artificial intelligence.' In a corrective to tech journalism that rarely leaves Silicon Valley, Hao ranges well beyond the Bay Area with extensive fieldwork in Kenya, Colombia and Chile.

The Optimist is concentrated on Altman's life and times. Born in Chicago to progressive parents named Connie and Jerry, Altman was heavily influenced by their do-gooder spirit. His relentlessly upbeat manner and genuine technical skill made him a perfect fit for Silicon Valley. The arc of Altman's life also follows a classic script. He drops out of Stanford to launch a start-up that fizzles, but the effort brings him to the attention of Paul Graham, the co-founder of Y Combinator, an influential tech incubator that launched companies like Airbnb and Dropbox. By age 28, Altman has risen to succeed Graham as the organisation's president, setting the stage for his leadership in the AI revolution. As Hagey makes clear, success in this context is all about the way you use the people you know.

During the 2010s Altman joined a group of Silicon Valley investors determined to recover the grand ambitions of earlier tech eras. They sought to return to outer space, unlock nuclear fusion, achieve human-level AI and even defeat death itself. The investor Peter Thiel was a major influence, but Altman's most important collaborator was Elon Musk.

The early-2010s Musk who appears in both books is almost unrecognisable to observers who now associate him with black MAGA hats and chain-saw antics. This Musk, the builder of Tesla and SpaceX, believes that creating superintelligent computer systems is 'summoning the demon.' He becomes obsessed with the idea that Google will soon develop a true artificial intelligence and allow it to become a force for evil. Altman mirrors his anxieties and persuades him to bankroll a more idealistic rival.
He pitched a 'Manhattan Project for AI,' a nonprofit to develop a good AI in order to save humanity from its evil twin. Musk guaranteed $1 billion and even supplied the name OpenAI.

Hagey's book, written with Altman's cooperation, is no hagiography. The Optimist lets the reader see how thoroughly Altman outfoxed his patron. It's striking that, despite providing much of the initial capital and credibility, Musk ends up with almost nothing to show for his investment.

Hao's 2020 profile of OpenAI, published in the MIT Technology Review, was unflattering, and the company declined to cooperate with her for her book. She wants to make its negative spillover effects evident. Hao does an admirable job of telling the stories of workers in Nairobi who earn 'starvation wages to filter out violence and hate speech' from ChatGPT, and of visits to communities in Chile where data centres siphon prodigious amounts of water and electricity to run complex hardware.

Altman recently told the statistician Nate Silver that if we achieve human-level AI, 'poverty really does just end.' But motives matter. The efficiencies of the cotton gin saved on labour but made slavery even more lucrative. If the aim is not, in the first place, to help the world, but instead to get bigger - better chips, more data, smarter code - then our problems might just get bigger too.

Note: The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to AI systems. OpenAI and Microsoft have denied those claims.

The reviewer is a law professor at Columbia University. ©2025 The New York Times News Service

How Peter Thiel's Relationship With Eliezer Yudkowsky Launched the AI Revolution

WIRED

20-05-2025



May 20, 2025, 7:00 AM

The AI doomer and the AI boomer both created each other's monsters. An excerpt from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future.

In a new book about Sam Altman, journalist Keach Hagey maps the connections between the minds driving the AI boom. Photo-illustration: Jacqui VanLiew; Getty Images

It would be hard to overstate the impact that Peter Thiel has had on the career of Sam Altman. After Altman sold his first startup in 2012, Thiel bankrolled his first venture fund, Hydrazine Capital. Thiel saw Altman as an inveterate optimist who stood at 'the absolute epicenter, maybe not of Silicon Valley, but of a Silicon Valley zeitgeist.' As Thiel put it, 'If you had to look for the one person who represented a millennial tech person, it would be Altman.' Each year, Altman would point Thiel toward the most promising startup at Y Combinator - Airbnb in 2012, Stripe in 2013, Zenefits in 2014 - and Thiel would swallow hard and invest, even though he sometimes felt like he was being swept up in a hype cycle. Following Altman's advice brought Thiel's Founders Fund some immense returns.

Thiel, meanwhile, became the loudest voice critiquing the lack of true technological progress amidst all the hype. 'Forget flying cars,' he quipped during a 2012 Stanford lecture. 'We're still sitting in traffic.' By the time Altman took over Y Combinator in 2014, he had internalized Thiel's critique of 'tech stagnation' and channeled it to remake YC as an investor in 'hard tech' moonshots like nuclear energy, supersonic planes, and artificial intelligence. Now it was Altman who was increasingly taking his cues from Thiel.

And if it's hard to exaggerate Thiel's effect on Altman, it's similarly easy to understate the influence that an AI-obsessed autodidact named Eliezer Yudkowsky had on Thiel's early investments in AI. Though he has since become perhaps the world's foremost AI doomsday prophet, Yudkowsky started out as a magnetic, techno-optimistic wunderkind who excelled at rallying investors, researchers, and eccentrics around a quest to 'accelerate the singularity.'

In this excerpt from the forthcoming book The Optimist, Keach Hagey describes how Thiel's relationship with Yudkowsky set the stage for the generative AI revolution: How it was Yudkowsky who first inspired one of the founders of DeepMind to imagine and build a 'superintelligence,' and Yudkowsky who introduced the founders of DeepMind to Thiel, one of their first investors. How Thiel's conversations with Altman about DeepMind would help inspire the creation of OpenAI. And how Thiel, as one of Yudkowsky's most important backers, inadvertently seeded the AI-apocalyptic subcultures that would ultimately play a role in Sam Altman's ouster, years later, as CEO of OpenAI.

Like Sam Altman, Peter Thiel had long been obsessed with the possibility that one day computers would become smarter than humans and unleash a self-reinforcing cycle of exponential technological progress, an old science fiction trope often referred to as 'the singularity.' The term was first introduced by the mathematician and Manhattan Project adviser John von Neumann in the 1950s, and popularized by the acclaimed sci-fi author Vernor Vinge in the 1980s. Vinge's friend Marc Stiegler, who worked on cybersecurity for the likes of Darpa while drafting futuristic novels, recalled once spending an afternoon with Vinge at a restaurant outside a sci-fi convention 'swapping stories we would never write because they were both horrific and quite possible. We were too afraid some nutjob would pick one of them up and actually do it.'

Among the many other people influenced by Vinge's fiction was Eliezer Yudkowsky. Born into an Orthodox Jewish family in 1979 in Chicago, Yudkowsky was the son of a psychiatrist mother and a physicist father who went on to work at Bell Labs and Intel on speech recognition, and was himself a devoted sci-fi fan. Yudkowsky began reading science fiction at age 7 and writing it at age 9. At 11, he scored a 1410 on the SAT. By seventh grade, he told his parents he could no longer tolerate school. He did not attend high school. By the time he was 17, he was painfully aware that he was not like other people, posting a web page declaring that he was a 'genius' but 'not a Nazi.' He rejected being defined as a 'male teenager,' instead preferring to classify himself as an 'Algernon,' a reference to the famous Daniel Keyes short story about a lab mouse who gains enhanced intelligence. Thanks to Vinge, he had discovered the meaning of life. 'The sole purpose of this page, the sole purpose of this site, the sole purpose of anything I ever do as an Algernon is to accelerate the Singularity,' he wrote.

Around this time, Yudkowsky discovered an obscure mailing list of a society calling itself the Extropians, which was the subject of a 1994 article in Wired that happened to include their email address at the end. Founded by philosopher Max More in the 1980s, Extropianism is a form of pro-science super-optimism that seeks to fight entropy, the universal law that says things fall apart, everything tends toward chaos and death, on all fronts. In practical terms, this meant signing up to have their bodies, or at least heads, frozen at negative 321 degrees Fahrenheit at the Alcor Life Extension Foundation in Scottsdale, Arizona, after they died. They would be revived once humanity was technologically advanced enough to do so. More philosophically, fighting entropy meant abiding by five principles: Boundless Expansion, Self-Transformation, Dynamic Optimism, Intelligent Technology, and Spontaneous Order. (Dynamic Optimism, for example, involved a technique called selective focus, in which you'd concentrate on only the positive aspects of a given situation.)

Robin Hanson, who joined the movement and became renowned for creating prediction markets, described attending multilevel Extropian parties at big houses in Palo Alto at the time. 'And I was energized by them, because they were talking about all these interesting ideas. And my wife was put off because they were not very well presented, and a little weird,' he said. 'We all thought of ourselves as people who were seeing where the future was going to be, and other people didn't get it. Eventually, eventually, we'd be right, but who knows exactly when.'

More's cofounder of the journal Extropy, Tom Bell, aka T. O. Morrow (Bell claims that Morrow is a distinct persona and not simply a pen name), wrote about systems of 'polycentric law' that could arise organically from voluntary transactions between agents free of government interference, and of 'Free Oceana,' a potential Extropian settlement on a man-made floating island in international waters. (Bell ended up doing pro bono work years later for the Seasteading Institute, for which Thiel provided seed funding.) If this all sounds more than a bit libertarian, that's because it was.
The WIRED article opens at one such Extropian gathering, during which an attendee shows up dressed like the 'State,' wearing a vinyl bustier, miniskirt, and chain harness top and carrying a riding crop, dragging another attendee dressed up as 'the Taxpayer' on a leash on all fours.

The mailing list and broader Extropian community had only a few hundred members, but among them were a number of famous names, including Hanson; Marvin Minsky, the Turing Award-winning scientist who founded MIT's AI lab in the late 1950s; Ray Kurzweil, the computer scientist and futurist whose books would turn 'the singularity' into a household word; Nick Bostrom, the Swedish philosopher whose writing would do the same for the supposed 'existential risk' posed by AI; Julian Assange, a decade before he founded WikiLeaks; and three people, Nick Szabo, Wei Dai, and Hal Finney, rumored to either be or be adjacent to the pseudonymous creator of Bitcoin, Satoshi Nakamoto.

'It is clear from even a casual perusal of the Extropians archive (maintained by Wei Dai) that within a few months, teenage Eliezer Yudkowsky became one of this extraordinary cacophony's preeminent voices,' wrote the journalist Jon Evans in his history of the movement. In 1996, at age 17, Yudkowsky argued that superintelligences would be a great improvement over humans, and could be here by 2020. Two members of the Extropian community, internet entrepreneurs Brian and Sabine Atkins, who met on an Extropian mailing list in 1998 and were married soon after, were so taken by this message that in 2000 they bankrolled a think tank for Yudkowsky, the Singularity Institute for Artificial Intelligence.

At 21, Yudkowsky moved to Atlanta and began drawing a nonprofit salary of around $20,000 a year to preach his message of benevolent superintelligence. 'I thought very smart things would automatically be good,' he said. Within eight months, however, he began to realize that he was wrong, way wrong. AI, he decided, could be a catastrophe. 'I was taking someone else's money, and I'm a person who feels a pretty deep sense of obligation towards those who help me,' Yudkowsky explained. 'At some point, instead of thinking, "If superintelligences don't automatically determine what is the right thing and do that thing that means there is no real right or wrong, in which case, who cares?" I was like, "Well, but Brian Atkins would probably prefer not to be killed by a superintelligence."' He thought Atkins might like to have a 'fallback plan,' but when he sat down and tried to work one out, he realized with horror that it was impossible. 'That caused me to actually engage with the underlying issues, and then I realized that I had been completely mistaken about everything.'

The Atkinses were understanding, and the institute's mission pivoted from making artificial intelligence to making friendly artificial intelligence. 'The part where we needed to solve the friendly AI problem did put an obstacle in the path of charging right out to hire AI researchers, but also we just surely didn't have the funding to do that,' Yudkowsky said. Instead, he devised a new intellectual framework he dubbed 'rationalism.' (While on its face, rationalism is the belief that humankind has the power to use reason to come to correct answers, over time it came to describe a movement that, in the words of writer Ozy Brennan, includes 'reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism.' Scott Alexander, Yudkowsky's intellectual heir, jokes that the movement's true distinguishing trait is the belief that 'Eliezer Yudkowsky is the rightful caliph.')

In a 2004 paper, 'Coherent Extrapolated Volition,' Yudkowsky argued that friendly AI should be developed based not just on what we think we want AI to do now, but what would actually be in our best interests. 'The engineering goal is to ask what humankind "wants," or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.,' he wrote. In the paper, he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: If your AI is programmed to produce paper clips, and you're not careful, it might end up filling the solar system with paper clips.

In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank founded in the 1980s to push forward nanotechnology. (Many of its original members came from the L5 Society, which was dedicated to pressing for the creation of a space colony hovering just behind the moon, and successfully lobbied to keep the United States from signing the United Nations Moon Agreement of 1979 due to its provision against terraforming celestial bodies.) Thiel was in attendance, regaling fellow guests about a friend who was a market bellwether, because every time he thought some potential investment was hot, it would tank soon after. Yudkowsky, having no idea who Thiel was, walked up to him after dinner. 'If your friend was a reliable signal about when an asset was going to go down, they would need to be doing some sort of cognition that beat the efficient market in order for them to reliably correlate with the stock going downwards,' Yudkowsky said, essentially reminding Thiel about the efficient-market hypothesis, which posits that all risk factors are already priced into markets, leaving no room to make money from anything besides insider information. Thiel was charmed.

Thiel and Yudkowsky began having occasional dinners together. Yudkowsky came to regard Thiel 'as something of a mentor figure,' he said. In 2005, Thiel started funding Yudkowsky's Singularity Institute, and the following year they teamed up with Ray Kurzweil, whose book The Singularity Is Near had become a bestseller, to create the Singularity Summit at Stanford University. Over the next six years, it expanded to become a prominent forum for futurists, transhumanists, Extropians, AI researchers, and science fiction authors, including Bostrom, More, Hanson, Stanford AI professor Sebastian Thrun, XPrize founder Peter Diamandis, and Aubrey de Grey, a gerontologist who claims humans can eventually defeat aging. Skype cofounder Jaan Tallinn, who participated in the summit, was inspired by Yudkowsky to become one of the primary funders of research dedicated to reducing existential risk from AI. Another summit participant, physicist Max Tegmark, would go on to co-found the Future of Life Institute. Vernor Vinge himself even showed up, looking like a public school chemistry teacher with his Walter White glasses and tidy gray beard, cheerfully reminding the audience that when the singularity comes, 'We're no longer in the driver's seat.'
In 2010, one of the AI researchers whom Yudkowsky invited to speak at the summit was Shane Legg, a New Zealand-born mathematician, computer scientist, and ballet dancer who had been obsessed with building superintelligence ever since Yudkowsky had introduced him to the idea a decade before. Legg had been working at Intelligenesis, a New York-based startup founded by the computer scientist Ben Goertzel that was trying to develop the world's first AI. Its best-known product was WebMind, an ambitious software project that attempted to predict stock market trends. Goertzel, who had a PhD in mathematics, had been an active poster on the Extropians mailing list for years, sparring affectionately with Yudkowsky on transhumanism and libertarianism. (He was in favor of the former but not so much the latter.) Back in 2000, Yudkowsky came to speak at Goertzel's company (which would go bankrupt within a year). Legg points to the talk as the moment when he started to take the idea of superintelligence seriously, going beyond the caricatures in the movies. Goertzel and Legg began referring to the concept as 'artificial general intelligence.'

Legg went on to get his own PhD, writing a dissertation, 'Machine Super Intelligence,' that noted the technology could become an existential threat, and then moved into a postdoctoral fellowship at University College London's Gatsby Computational Neuroscience Unit, a lab that encompassed neuroscience, machine learning, and AI. There, he met a gaming savant from London named Demis Hassabis, the son of a Singaporean mother and Greek Cypriot father. Hassabis had once been the second-ranked chess player in the world under the age of 14. Now he was focused on building an AI inspired by the human brain.

Legg and Hassabis shared a common, deeply unfashionable vision. 'It was basically eye-rolling territory,' Legg told the journalist Cade Metz. 'If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character.' Legg thought it could be built in the academy, but Hassabis, who had already tried a startup and failed, knew better. The only way to do it was through industry. And there was one investor who would be an obvious place to start: Peter Thiel.

Legg and Hassabis came to the 2010 Singularity Summit as presenters, yes, but really to meet Thiel, who often invited summit participants to his townhouse in San Francisco, according to Metz's account. Hassabis spoke on the first day of the summit, which had moved to a hotel in downtown San Francisco, outlining his vision for an AI that took inspiration from the human brain. Legg followed the next day with a talk on how AI needed to be measurable to move forward. Afterward, they went for cocktails at Thiel's Marina District home, with its views of both the Golden Gate Bridge and the Palace of Fine Arts, and were delighted to see a chessboard out on a table. They wove through the crowd and found Yudkowsky, who led them over to Thiel for an introduction. Trying to play it cool, Hassabis skipped the hard sell and began with chess, a topic he knew was dear to Thiel's heart. The game had stood the test of time, Hassabis said, because the knight and bishop had such an interesting tension: equal in value, but profoundly different in strengths and weaknesses. Thiel invited them to return the next day to tell him about their startup. In the morning, they pitched Thiel, fresh from a workout, across his dining room table.
Hassabis said they were building AGI inspired by the human brain, would initially measure its progress by training it to play games, and were confident that advances in computing power would drive their breakthroughs. Thiel balked at first, but over the course of weeks agreed to invest $2.25 million, becoming the as-yet-unnamed company's first big investor. A few months later, Hassabis, Legg, and their friend, the entrepreneur Mustafa Suleyman, officially cofounded DeepMind, a reference to the company's plans to combine 'deep learning,' a type of machine learning that uses layers of neural networks, with actual neuroscience. From the beginning, they told investors that their goal was to develop AGI, even though they feared it could one day threaten humanity's very existence.

It was through Thiel's network that DeepMind recruited his fellow PayPal veteran Elon Musk as an investor. Thiel's Founders Fund, which had invested in Musk's rocket company, SpaceX, invited Hassabis to speak at a conference in 2012, and Musk was in attendance. Hassabis laid out his 10-year plan for DeepMind, touting it as a 'Manhattan Project' for AI years before Altman would use the phrase. Thiel recalled one of his investors joking on the way out that the speech was impressive, but he felt the need to shoot Hassabis to save the human race.

The next year, Luke Nosek, a cofounder of both PayPal and Founders Fund who is friends with Musk and sits on the SpaceX board, introduced Hassabis to Musk. Musk took Hassabis on a tour of SpaceX's headquarters in Los Angeles. When the two settled down for lunch in the company cafeteria, they had a cosmic conversation. Hassabis told Musk he was working on the most important thing in the world, a superintelligent AI. Musk responded that he, in fact, was working on the most important thing in the world: turning humans into an interplanetary species by colonizing Mars. Hassabis responded that that sounded great, so long as a rogue AI did not follow Musk to Mars and destroy humanity there too. Musk got very quiet. He had never really thought about that. He decided to keep tabs on DeepMind's technology by investing in it.

In December 2013, Hassabis stood on stage at a machine-learning conference at Harrah's in Lake Tahoe and demonstrated DeepMind's first big breakthrough: an AI agent that could learn to play and then quickly master the classic Atari video game Breakout without any instruction from humans. DeepMind had done this with a combination of deep neural networks and reinforcement learning, and the results were so stunning that Google bought the company for a reported $650 million a month later. The implications of DeepMind's achievement, which was a major step toward a general-purpose intelligence that could make sense of a chaotic world around it and work toward a goal, were not widely understood until the company published a paper on its findings in the journal Nature more than a year later. But Thiel, as a DeepMind investor, understood them well, and discussed them with Altman.

In February 2014, a month after Google bought DeepMind, Altman wrote a post on his personal blog titled 'AI' that declared the technology the most important tech trend that people were not paying enough attention to. 'To be clear, AI (under the common scientific definition) likely won't work. You can say that about any new technology, and it's a generally correct statement. But I think most people are far too pessimistic about its chances,' he wrote, adding that 'artificial general intelligence might work, and if it does, it will be the biggest development in technology ever.' A little more than a year later, Altman teamed up with Elon Musk to cofound OpenAI as a noncorporate counterweight to Google's DeepMind. And with that, the race to build artificial general intelligence was on.

This was a race that Yudkowsky had helped set off. But as it picked up speed, Yudkowsky himself was growing increasingly alarmed about what he saw as the extinction-level danger it posed. He was still influential among investors, researchers, and eccentrics, but now as a voice of extreme caution. Yudkowsky was not personally involved in OpenAI, but his blog, LessWrong, was widely read among the AI researchers and engineers who worked there. (While still at Stripe, OpenAI cofounder Greg Brockman had organized a weekly LessWrong reading group.) The rationalist ideas Yudkowsky espoused overlapped significantly with those of the Effective Altruism movement, which was turning much of its attention to preventing existential risk from AI.

A few months after this race spilled into full public view with OpenAI's release of ChatGPT in November 2022, Yudkowsky published an essay in Time magazine arguing that unless the current wave of generative AI research was halted, 'literally everyone on Earth will die.' Thiel felt that Yudkowsky had become 'extremely black-pilled and Luddite.' And two of OpenAI's board members had ties to Effective Altruism. Less than a week before Altman was briefly ousted as CEO in the fall of 2023, Thiel warned his friend, 'You don't understand how Eliezer has programmed half the people in your company to believe this stuff.' Thiel's warning came with some guilt that he had created the many-headed monster that was now coming for his friend.

Excerpt adapted from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey. Published by arrangement with W. W. Norton & Company. Copyright © 2025 by Keach Hagey.

Wax London to enter US retail market with The Optimist residency

Fashion United

29-04-2025



British menswear label Wax London is set to officially enter the US market through an exclusive residency with Los Angeles retailer The Optimist. The US retail debut marks the brand's first physical presence in the region, where it had until now only been available via its e-commerce platform.

From May 8, Wax London will take up residency at the menswear store, located at Platform, Culver City, where it will offer its full menswear collection throughout the month.

Co-founded in 2015 by Tom Holmes, who also serves as the brand's designer, Wax London draws inspiration from global travels and British heritage to inform collections of 'characterful clothing made from responsibly sourced fabrics', a press release states. The company already operates three physical stores in London, its UK home base, having first ventured into retail in 2020 after previously following a strategy dedicated to wholesale and online sales.

Its presence in The Optimist is apt, considering the store's emphasis on sleek menswear lines. Opened in 2019, the retailer typically houses US-exclusive brands or pieces in an elevated setting, complete with custom furnishings, in-house consultations and cultural programming.

Here's why Ford Fry keeps opening restaurants, including new Mexican concept, in Nashville

Yahoo

27-03-2025



For restaurateurs like Ford Fry, Nashville is very much an "It city." The Atlanta-based chef has opened multiple restaurants in the city, including The Optimist and Star Rover. Now he's bringing a new casual Mexican concept, Little Rey.

Little Rey will open this spring at 2019 West End Avenue after a series of events to introduce the concept to the Nashville community, including a March 29 party with free food and drinks in the restaurant's parking lot (more details below). The restaurant's menu is in part built around pollo al carbon, or coal-roasted chicken. "It's based around northern Mexico chicken al carbon restaurants," he said. "We're selling chicken cooked over live coals, breakfast tacos, adding wood-cooked meat over salads and queso."

The northern Mexico-Texas concept is a hit in Atlanta, where it has grown to two locations. The restaurant sells weekend breakfast tacos, a variety of traditional tacos and queso-drenched fries. It's also beloved by local families for its take-home meals of whole roasted chickens, tortillas and salsa.

Fry thinks the restaurant's location near Vanderbilt University will boost its popularity, and his broker has told him as much. But he's not taking anything for granted. "When we opened Little Rey in Atlanta, there were lines around the block," he said. "But when we go out of town, people don't know us like they do in Atlanta, so we do what we can to get the word out." Live-fire cooking, which will take place at the March 29 event, is a good way to send up a literal smoke signal, he said.

This is, after all, the first expansion into Tennessee for the small fast-casual chain, which also has a store in Texas and another in North Carolina. But it's not likely to be the last restaurant in the Nashville community for Fry, a James Beard Award semifinalist for Outstanding Restaurateur who now has around a dozen unique concepts under his belt.

"I think we definitely want to come to cities that feel like 'our people' and I want any reason to come to Nashville," Fry said. "I've even thought about moving there." Fry was headed to Nashville as he spoke, where he was looking to potentially open a steakhouse at a spot in the under-renovation Arcade.

"Nashville is just the place everyone wants to move to for some reason," he said. That includes tourists and, increasingly, restaurant owners. And that's created tough competition, but one that seems, by and large, friendly. Rather than being suspicious of outsiders, locals seem to welcome transplants such as chef Mason Hereford, who recently opened his first Turkey and the Wolf outside of New Orleans in an East Nashville neighborhood. "More than anything, we're super excited to be joining the Nashville community," Hereford told The Tennessean weeks before the restaurant opened. "Food and beverage people from around the city have been over the top welcoming, and we're loving making new friends and having a new city to call home."

That welcoming attitude has also made for a rich city with ever-more diverse restaurants. Fry's Mexican restaurant is part of a wave to hit the city in recent months, and there are more on the way. Little Rey is on the casual end of that Mexican restaurant boom, and Fry thinks that's a good thing. "I think people in Nashville like food that tastes good, don't care about pretentious stuff, and they'll line out the door if it's good," he said.

There's no set opening date for Little Rey yet, but you can try some of the food on March 29 with a parking lot party featuring free food including quesadillas, elotes, a salsa bar, margaritas and limeade. The Little Rey Park-in Lot Party will take over 2019 West End Avenue from 2-9 p.m.

Mackensy Lunsford is the senior dining reporter for The Tennessean. You can reach her at mlunsford@

This article originally appeared on Nashville Tennessean: Nashville food: Chef Ford Fry to open Mexican restaurant Little Rey
