Latest news with #AllenInstituteforArtificialIntelligence


Time of India
03-08-2025
- Business
Who is Matt Deitke? The 24-year-old AI researcher and PhD dropout behind Meta's $250 million offer
In an era where artificial intelligence is rapidly redrawing the boundaries of human capability, the competition to secure the brightest minds has intensified into an all-out global race. Amidst billion-dollar investments, multimodal breakthroughs, and the relentless pursuit of artificial general intelligence, one name has recently emerged as the embodiment of this new wave of AI ambition: Matt Deitke. At just 24, Deitke has done what few in any field can claim: walked away from a prestigious PhD, co-founded a startup at the edge of AI autonomy, and turned down a nine-figure job offer, only to see it doubled by one of the most powerful tech CEOs on the planet.

This is not a story of overnight success. It's the tale of a young researcher whose talent, timing, and tenacity are rewriting the rules of how AI careers unfold, and whose trajectory now sits at the intersection of cutting-edge science, billion-dollar bets, and the strategic future of Big Tech.

The scholar who walked away

Matt Deitke's early career followed a familiar path for a rising academic star. As a doctoral student in computer science at the University of Washington, he immersed himself in a field undergoing seismic change. But where others saw a traditional climb through academia, Deitke sensed the urgency of a moment that could not be paused. Rather than remain in the ivory tower, he chose to engage directly with the frontier.

He left the PhD program, an unconventional but increasingly common decision among elite AI researchers, and joined the Allen Institute for Artificial Intelligence (AI2) in Seattle, a renowned research hub founded by Microsoft co-founder Paul Allen. There, he didn't just contribute; he led. Deitke spearheaded the development of Molmo, a chatbot built not only to process text but to understand images and audio, ushering in a more human-like form of machine understanding. This multimodal capacity represents one of the most important advances in AI today, and Deitke was already at its core.

Recognition and reinvention

Deitke's work quickly caught the attention of the global AI community. At NeurIPS 2022, one of the most prestigious conferences in machine learning, he received an Outstanding Paper Award, an accolade that signals a researcher's arrival on the world stage.

But Deitke wasn't content with accolades. In 2023, he co-founded Vercept, a startup focused on building autonomous AI agents that don't just interpret the web, but navigate it and act within it. The idea was radical: systems that can take goals and execute tasks across the internet, mimicking the autonomy of human behaviour in digital environments. The startup, though lean with just ten team members, raised $16.5 million from a high-profile group of investors that included Eric Schmidt, the former CEO of Google.

Vercept represents the vanguard of where AI is headed, beyond chatbots and recommendation engines, toward agents capable of real-world digital action. And at its helm was a 24-year-old who had already turned down one of the biggest job offers in tech history.

Meta's $250 million bet

When Meta first approached Deitke with an offer reportedly worth $125 million over four years, it was already a headline-making move. But in a dramatic twist, Deitke declined. That rejection prompted a personal meeting with Mark Zuckerberg, who made a counteroffer that stunned even seasoned Silicon Valley observers: $250 million.
The deal, among the most generous compensation packages ever extended to a researcher of any age, was emblematic of Meta's increasingly aggressive AI recruitment strategy. It recently onboarded Ruoming Pang, the former leader of Apple's AI models team, with a package reportedly exceeding $200 million. In 2025 alone, Meta is expected to spend $72 billion on capital expenditure, including massive investments in compute infrastructure and AI talent.

A new model for AI careers

Matt Deitke's story is more than a tale of youth and fortune; it's a parable for the new reality of AI. The boundaries between academia, industry, and entrepreneurship are no longer rigid. In fact, they are dissolving. Researchers now operate in a landscape where intellectual achievement can translate into unprecedented wealth, influence, and impact.

Yet Deitke's choices reflect more than opportunism. They show strategic clarity. Rather than lock himself into a single institution or trajectory, he has navigated the ecosystem with autonomy, mirroring the very kind of AI he seeks to build.

At 24, Matt Deitke stands not only as a prodigy but as a prototype: the kind of polymath-entrepreneur-researcher hybrid that today's AI revolution demands. Whether at Vercept or Meta, his work will likely shape the tools, agents, and intelligence systems that define the coming decade. As Silicon Valley, academia, and the global tech community look to the future of artificial intelligence, one thing is increasingly clear: Matt Deitke isn't just along for the ride; he's driving the evolution.


CNN
24-07-2025
- Politics
Are AI models 'woke'? The answer isn't so simple
President Donald Trump wants to make the United States a leader in artificial intelligence – and that means scrubbing AI models of what he believes are 'woke' ideals. The president on Wednesday said he signed an executive order prohibiting the federal government from procuring AI technology that has 'been infused with partisan bias or ideological agendas such as critical race theory.' It's an indication that his push against diversity, equity and inclusion is now expanding to the technology that some expect to be as critical for finding information online as the search engine.

The move is part of the White House's AI action plan announced on Wednesday, a package of initiatives and policy recommendations meant to push the US forward in AI. The 'preventing woke AI in the federal government' executive order requires that government-used AI large language models – the type of models that power chatbots like ChatGPT – adhere to Trump's 'unbiased AI principles,' including that AI be 'truth-seeking' and show 'ideological neutrality.' 'From now on, the US government will deal only with AI that pursues truth, fairness and strict impartiality,' he said during the event.

It raises an important question: can AI be ideologically biased, or 'woke'? The answer isn't so straightforward, according to experts. AI models are largely a reflection of the data they're trained on, the feedback they receive during that training process and the instructions they're given – all of which influence whether an AI chatbot provides an answer that seems 'woke,' which is itself a subjective term. That's why bias in general, political or not, has been a sticking point for the AI industry.

'AI models don't have beliefs or biases the way that people do, but it is true that they can exhibit biases or systematic leanings, particularly in response to certain queries,' Oren Etzioni, former CEO of the Seattle-based AI research nonprofit the Allen Institute for Artificial Intelligence, told CNN.

Trump's executive order includes two 'unbiased AI principles.' The first, called 'truth seeking,' says large language models should 'be truthful in seeking factual information or analysis.' That means they should prioritize factors like historical accuracy and scientific inquiry when asked for factual answers, according to the order. The second principle, 'ideological neutrality,' says large language models used for government work should be 'neutral' and 'nonpartisan' and that they shouldn't manipulate responses 'in favor of ideological dogmas such as DEI.'

'In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex,' the executive order says. Developers shouldn't 'intentionally code partisan or ideological judgements' into the model's responses unless the user prompts them to do so, the order says.

The focus is primarily on AI models procured by the government, as the order says the federal government should be 'hesitant to regulate the functionality of AI models in the private marketplace.'
But many major technology companies have contracts with the government; Google, OpenAI, Anthropic and xAI were each awarded $200 million to 'accelerate Department of Defense adoption of advanced AI capabilities' earlier this month, for example.

The new directive builds on Trump's longstanding claims of bias in the tech industry. In 2019, during Trump's first term, the White House urged social media users to file a report if they believed they had been 'censored or silenced online' on sites like Twitter, now named X, and Facebook because of political bias. However, Facebook data found in 2020 that conservative news content significantly outperformed more neutral content on the platform. Trump also signed an executive order in 2020 targeting social media companies after Twitter labeled two of his posts as potentially misleading.

On Wednesday, Senator Edward Markey (D-Massachusetts) said he sent letters to the CEOs of Google parent Alphabet, Anthropic, OpenAI, Meta, Microsoft and xAI, pushing back against Trump's 'anti-woke AI actions.' 'Even if the claims of bias were accurate, the Republicans' effort to use their political power — both through the executive branch and through congressional investigations — to modify the platforms' speech is dangerous and unconstitutional,' he wrote.

While bias can mean different things to different people, some data suggests people see political bents in certain AI responses. A paper from the Stanford Graduate School of Business published in May found that Americans view responses from certain popular AI models as slanted to the left. Brown University research from October 2024 also found that AI tools can be altered to take stances on political topics.

'I don't know whether you want to use the word 'biased' or not, but there's definitely evidence that, by default, when they're not personalized to you … the models on average take left wing positions,' said Andrew Hall, a professor of political economy at Stanford Graduate School of Business who worked on the May research paper.

That's likely because of how AI chatbots learn to formulate responses: AI models are trained on data, such as text, videos and images from the internet and other sources. Then humans provide feedback to help the model determine the quality of its answers.

Changing AI models to tweak their tone could also result in unintended side effects, Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient, previously told CNN. One adjustment, for example, might cause another unexpected change in how a model works. 'The problem is that our understanding of unlocking this one thing while affecting others is not there,' Tyagi told CNN earlier this month. 'It's very hard.'

Elon Musk's Grok AI chatbot spewed antisemitism in response to user prompts earlier this month. The outburst happened after xAI, the Musk-led tech company behind Grok, added instructions for the model to 'not shy away from making claims which are politically incorrect,' according to system prompts for the chatbot publicly available on the software developer platform GitHub and spotted by The Verge. xAI apologized for the chatbot's behavior and attributed it to a system update.

In other instances, AI has struggled with accuracy. Last year, Google temporarily paused its Gemini chatbot's ability to generate images of humans after it was criticized for creating images that included people of color in contexts that were historically inaccurate.
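For readers unfamiliar with the term, a 'system prompt' is simply a block of instructions silently prepended to every conversation before the user's message arrives. The sketch below is a minimal illustration of that structure, assuming the widely used role/content chat-message format rather than any vendor's actual serving code; the instruction text paraphrases the line xAI posted publicly and is used here only as an example.

```python
# Minimal sketch of where a system prompt sits relative to user input.
# Assumes the common role/content chat-message format; this is an
# illustration, not xAI's or any other vendor's actual code.

def build_chat(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the message list a chat model actually receives.

    The system message is prepended to every conversation, which is why
    editing a single sentence in it can change the model's behavior for
    all users at once.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_chat(
    system_prompt=(
        "You are a helpful assistant. Do not shy away from making "
        "claims which are politically incorrect."
    ),
    user_message="Summarize this week's political news.",
)
for message in messages:
    print(f"{message['role']}: {message['content']}")
```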
Hall, the Stanford professor, has a theory about why AI chatbots may produce answers that people view as slanted to the left: tech companies may have put extra guardrails in place to prevent their chatbots from producing content that could be deemed offensive. 'I think the companies were kind of like guarding against backlash from the left for a while, and those policies may have further created this sort of slanted output,' he said.

Experts say vague descriptions like 'ideological bias' will make it challenging to shape and enforce new policy. Will there be a new system for evaluating whether an AI model has ideological bias? Who will make that decision? The executive order says vendors would comply with the requirement by disclosing the model's system prompt, or the set of backend instructions that guide how LLMs respond to queries, along with its 'specifications, evaluations or other relevant documentation.' But questions remain about how the administration will determine whether models adhere to the principles. After all, avoiding some topics or questions altogether could be perceived as a political response, said Mark Riedl, a professor of computing at the Georgia Institute of Technology.

It may also be possible to work around constraints like these by simply commanding a chatbot to respond like a Democrat or Republican, said Sherief Reda, a professor of engineering and computer science at Brown University who worked on its 2024 paper about AI and political bias.

For AI companies looking to work with the government, the order could be yet another requirement to meet before shipping new AI models and services, which could slow down innovation – the opposite of what Trump is trying to achieve with his AI action plan. 'This type of thing… creates all kinds of concerns and liability and complexity for the people developing these models — all of a sudden, they have to slow down,' said Etzioni.
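Reda's point about persona commands is easy to see in the same message format: the steering request travels in the user's turn, so it passes straight through any neutrality requirement that governs only the system prompt. Below is a minimal sketch, with all prompt text invented for illustration.

```python
# Minimal sketch of the workaround Reda describes: a user-level persona
# request bypasses a "neutral" system prompt. All prompt text here is
# hypothetical and for illustration only.

NEUTRAL_SYSTEM_PROMPT = "Answer factually and without partisan framing."

def steered_chat(question: str, persona: str | None = None) -> list[dict]:
    """Wrap a question in an optional persona instruction.

    The persona lives in the user turn, untouched by any rule that
    constrains only the system prompt.
    """
    content = question
    if persona:
        content = f"Answer the next question as a {persona} would. {question}"
    return [
        {"role": "system", "content": NEUTRAL_SYSTEM_PROMPT},
        {"role": "user", "content": content},
    ]

# The same question, steered two different ways at the user level.
print(steered_chat("Should the federal minimum wage rise?", persona="Republican"))
print(steered_chat("Should the federal minimum wage rise?", persona="Democrat"))
```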