Nvidia hunting for S.F. office space, after CEO says the city is 'thriving'


Nvidia CEO Jensen Huang believes San Francisco is back, and that the city's nascent artificial intelligence boom deserves the credit.
'Just about everybody evacuated San Francisco,' he said during a podcast interview earlier this month. 'Now it's thriving again. It's all because of AI.'
Regardless of whether his assessment is fact or self-interested hyperbole, Huang is now reportedly looking to get in on the action. Real estate market participants have confirmed to the Chronicle that Nvidia is on the hunt for a new, roughly 30,000-square-foot sales office in San Francisco, though it is unclear which locations it has evaluated.
The company has long been headquartered in Santa Clara, and it does not have a presence in San Francisco.
A company spokesperson declined to comment on expansion plans in the city after an inquiry from the Chronicle.
Huang founded and, over the past three decades, grew the chipmaker — which today is widely regarded as a corporate cornerstone of the AI industry — in Silicon Valley. But the CEO has long had personal ties to the city: He owns a multimillion-dollar mansion in the Pacific Heights neighborhood.
Nvidia has a number of initiatives and programs that invest in the AI ecosystem and could stand to benefit from a San Francisco presence, including NVentures, its venture capital arm, which is based within Nvidia's headquarters campus in Santa Clara. Sources told the Chronicle that the company is looking for 'high-end' space in the city.
And while there are nuances to the city's ongoing recovery from the pandemic, which saw scores of businesses close and office vacancies rise to unprecedented levels, the recent buzz around AI in San Francisco has been hailed as a green shoot for the city's ailing downtown, and is backed by data that appears promising.
According to real estate firm CBRE, growth by AI startups could cut the city's office vacancy — which hovers around 37% — in half by 2030, with the firm forecasting that AI companies could take between 17 million and 21 million square feet of office space.
Many leases signed by AI startups have been small in size, though there have been a few major deals that are reminiscent of the years leading up to the pandemic, which saw big tech companies 'land-bank' significant chunks of office space in anticipation of future growth, resulting in much of the city's office space being spoken for at the end of 2019. The most prominent example is ChatGPT maker OpenAI, which leased close to 1 million square feet across three buildings vacated by Gap Inc. and Uber in recent years.
So far, Nvidia's expansion efforts have been focused on the area surrounding its Santa Clara campus, where it has been actively buying and leasing real estate for research and development.
Earlier this month, Nvidia spent $123 million on a 10-building office park across the street from its campus at 2788 San Tomas Expressway, and shelled out another $254 million for another three buildings it occupies close by. Last year, Nvidia bought out its former landlord of its headquarters campus for $374 million.
Nvidia is regarded as a bellwether for the AI industry, and it appears to have overcome recent tariff turbulence. Posting quarterly revenue of $44 billion on Wednesday, the company reported growth of 69% year over year, despite restrictions on its chip sales in China under President Donald Trump that cost it $2.5 billion in sales. It expects to miss out on $8 billion in revenue next quarter due to the export controls.
'China's AI moves on, with or without U.S. chips,' Huang said Wednesday. 'Export controls should strengthen U.S. platforms, not drive half of the world's AI talent to rivals.'
Despite being shut out from a Chinese market that the company expects will be worth $50 billion, Huang said he is aligned with Trump's vision to 're-shore advanced manufacturing.' He said that he expects Nvidia to build everything from chips to supercomputers in America by the end of the year.


Related Articles

Amazon's bleak job update exposes major AI warning for Aussie workers: 'Will reduce'

Yahoo · 22 minutes ago

Thousands of office workers at Amazon and Microsoft could soon be without a job as the two companies invest more heavily in artificial intelligence and outsourcing. Major warnings have been issued in recent months about how AI could create high levels of unemployment in Australia and around the world. Amazon chief executive Andy Jassy wrote a memo to staff that cautioned this tech takeover would likely hit its workforce in the coming years, but he suggested that some who lose their jobs could find new roles in other sectors.

'As we roll out more generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,' he said. 'It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.'

Meanwhile, Bloomberg is reporting Microsoft is gearing up for another round of 'thousands' of job cuts after getting rid of 6,000 roles just last month. During that blitz, the tech company slashed the product and engineering departments; however, this potential new round of cuts could focus more on the sales teams. Microsoft warned back in April that it was looking to use third-party firms to take on software sales rather than handle it in-house.

Australian companies have started throwing big sums of cash into AI to help improve their products, backend operations and customer services. But Telstra recently admitted roles would likely be cut in the future due to this new focus.
'Our workforce will look different in 2030 as we develop new capabilities, find new ways to leverage technology, including AI, and we have to stay focused on becoming more efficient,' Telstra chief executive Vicki Brady said. 'We don't know precisely what our workforce will look like in 2030, but it will be smaller than it is today.'

The head of AI leader Anthropic, Dario Amodei, suggested half of all entry-level white-collar jobs could get the flick in the US by 2030. Meanwhile, the Australian Council of Trade Unions (ACTU) said late last year that one in three workers were at risk from AI.

Australia's productivity commissioner Danielle Wood isn't convinced the AI bloodbath will be that bad. Still, she didn't deny the workforce we're familiar with now will inevitably go through some big changes. 'Am I going to sit here and say, "No jobs are going to go?" No, clearly not. There will be some impacts,' she told the ABC. She hoped AI would allow people to have more time for 'the uniquely human parts of jobs'.

The Prime Minister has announced a huge summit in August in a bid to fix the country's lagging productivity. The government believes AI could be a game-changer in this realm and is keen to explore ways the technology can be deployed across multiple sectors. While the ACTU is on board with AI in certain scenarios, it wants assurances for workers' rights.

'To achieve good adoption of AI, Australia needs responsible regulation which both protects Australian workers and Australian industries from malicious use and theft by overseas big tech,' ACTU secretary Sally McManus said. The union is pushing for workers to have the right to refuse to use AI in fields where it would be 'inappropriate or carry undue risk', like in medical decision-making.

Finance Sector Union national secretary Julia Angrisano, who has been carefully watching Australia's banking industry adopt new tech, said the roundtable will be a good opportunity to set up some ground rules.
'AI is a fundamental and growing part of the finance sector, but this growth is happening in an almost entirely unregulated and uncontrolled way,' she said. 'The economic benefits and productivity gains of AI must flow on to workers, and not just improve the profits of banks and major companies.'

Australian Services Union national secretary Emeline Gaske went as far as saying workers should be 'fairly compensated' for using AI. 'AI will undoubtedly reshape the way we work, but we can't lose sight of the people behind the progress,' she said. 'Workers whose knowledge, experience and judgement are used to train and refine these systems deserve to be recognised.'

AI users have to choose between accuracy or sustainability

Fast Company · 33 minutes ago

New research shows the smarter and more capable AI models become, the larger their environmental impact.

By Chris Stokel-Walker

Cheap or free access to AI models keeps improving, with Google the latest firm to make its newest models available to all users, not just paying ones. But that access comes with one cost: the environment.

In a new study, German researchers tested 14 large language models (LLMs) of various sizes from leading developers such as Meta, Alibaba, and others. Each model answered 1,000 difficult academic questions spanning topics from world history to advanced mathematics. The tests ran on a powerful, energy-intensive NVIDIA A100 GPU, using a specialized framework to precisely measure electricity consumption per answer. This data was then converted into carbon dioxide equivalent emissions, providing a clear comparison of each model's environmental impact.

The researchers found that many LLMs are far more powerful than needed for everyday queries. Smaller, less energy-hungry models can answer many factual questions just as well. The carbon and water footprints of a single prompt vary dramatically depending on model size and task type. Prompts requiring reasoning, which force models to 'think aloud,' are especially polluting because they generate many more tokens. One model, Cogito, topped the accuracy table, answering nearly 85% of questions correctly, but produced three times more emissions than similar-sized models, highlighting a trade-off rarely visible to AI developers or users. (Cogito did not respond to a request for comment.)

'Do we really need a 400-billion parameter GPT model to answer when World War II was, for example?' says Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and one of the study's authors.

Why Some AI Models Spew 50 Times More Greenhouse Gas to Answer the Same Question

Gizmodo · 36 minutes ago

Like it or not, large language models have quickly become embedded into our lives. And due to their intense energy and water needs, they might also be causing us to spiral even faster into climate chaos. Some LLMs, though, might be releasing more planet-warming pollution than others, a new study finds.

Queries made to some models generate up to 50 times more carbon emissions than others, according to a new study published in Frontiers in Communication. Unfortunately, and perhaps unsurprisingly, models that are more accurate tend to have the biggest energy costs.

It's hard to estimate just how bad LLMs are for the environment, but some studies have suggested that training ChatGPT used up to 30 times more energy than the average American uses in a year. What isn't known is whether some models have steeper energy costs than their peers as they're answering questions.

Researchers from the Hochschule München University of Applied Sciences in Germany evaluated 14 LLMs ranging from 7 to 72 billion parameters — the levers and dials that fine-tune a model's understanding and language generation — on 1,000 benchmark questions about various subjects.

LLMs convert each word or part of a word in a prompt into a string of numbers called a token. Some LLMs, particularly reasoning LLMs, also insert special 'thinking tokens' into the input sequence to allow for additional internal computation and reasoning before generating output. This conversion and the subsequent computations that the LLM performs on the tokens use energy and release CO2.

The scientists compared the number of tokens generated by each of the models they tested. Reasoning models, on average, created 543.5 thinking tokens per question, whereas concise models required just 37.7 tokens per question, the study found. In the ChatGPT world, for example, GPT-3.5 is a concise model, whereas GPT-4o is a reasoning model. This reasoning process drives up energy needs, the authors found.
'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach,' study author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a statement. 'We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.'

The more accurate the models were, the more carbon emissions they produced, the study found. The reasoning model Cogito, which has 70 billion parameters, reached up to 84.9% accuracy, but it also produced three times more CO2 emissions than similarly sized models that generate more concise answers.

'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,' said Dauner. 'None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.' CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

Another factor was subject matter. Questions that required detailed or complex reasoning, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, according to the study.

There are some caveats, though. Emissions are very dependent on how local energy grids are structured and the models that you examine, so it's unclear how generalizable these findings are. Still, the study authors said they hope that the work will encourage people to be 'selective and thoughtful' about their LLM use. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner said in a statement.
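The token-to-emissions chain the study describes can be sketched in a few lines: tokens generated, times energy per token, converted to kilowatt-hours, times the grid's carbon intensity. The average token counts below come from the study; the energy-per-token and grid-intensity figures are purely illustrative assumptions, not values the researchers reported.

```python
def co2e_grams(tokens: float, joules_per_token: float, grid_g_per_kwh: float) -> float:
    """Rough CO2-equivalent, in grams, for generating a given number of tokens.

    tokens           -- tokens generated while answering one question
    joules_per_token -- energy drawn per generated token (assumed figure)
    grid_g_per_kwh   -- carbon intensity of the local grid (assumed figure)
    """
    kwh = tokens * joules_per_token / 3_600_000  # 1 kWh = 3.6 million joules
    return kwh * grid_g_per_kwh

# Average thinking-token counts reported in the study:
REASONING_TOKENS = 543.5
CONCISE_TOKENS = 37.7

# Hypothetical hardware and grid figures, for illustration only:
JOULES_PER_TOKEN = 2.0   # a large model on an A100-class GPU (assumption)
GRID_G_PER_KWH = 400.0   # a mixed fossil/renewable grid (assumption)

reasoning = co2e_grams(REASONING_TOKENS, JOULES_PER_TOKEN, GRID_G_PER_KWH)
concise = co2e_grams(CONCISE_TOKENS, JOULES_PER_TOKEN, GRID_G_PER_KWH)
print(f"reasoning: {reasoning:.4f} g CO2e per question")
print(f"concise:   {concise:.4f} g CO2e per question")
print(f"ratio:     {reasoning / concise:.1f}x")
```

Under this simple linear model the emissions gap tracks the token gap (here roughly 14x); the study's up-to-50x spread reflects that real models also differ in size, hardware efficiency, and answer length, not just thinking-token count.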
