
"Pretty Workable": Ex-OpenAI Employee On Company's Email-Free Workplace
Calvin French-Owen, who worked at OpenAI from May 2024 to June 2025, claimed that during his tenure at the company, he received only about 10 emails.
In a detailed blog post, Mr French-Owen claimed that Slack, not email, was OpenAI's primary means of employee communication.
"An unusual part of OpenAI is that everything, and I mean everything, runs on Slack," he wrote in the blog post published on Tuesday.
"There is no email. I maybe received 10 emails in my entire time there. If you aren't organised, you will find this incredibly distracting," Mr French-Owen added.
Compared with other tech giants, this policy of prioritising Slack over email is unconventional. While Mr French-Owen warned that unorganised users will find the setup "incredibly distracting", he said it can be "pretty workable" if users carefully control channel overload and notification settings.
He said his 14-month employment with the company was demanding, covert, and unrelentingly high-pressure, with "vibes" on social media carrying unexpected weight.
He characterised a fast-paced, bottom-up culture in which projects come up on their own, move quickly, and occasionally run into each other.
"OpenAI is incredibly bottoms-up, especially in research... Rather than a grand 'master plan', progress is iterative and uncovered as new research bears fruit," Mr French-Owen said.
Speaking of OpenAI's culture, Mr French-Owen said the company expanded exponentially.
There were just over 1,000 employees at the company when he started, and after a year, it had surpassed 3,000, and Mr French-Owen was among the top 30 per cent by tenure.
"Nearly everyone in leadership is doing a drastically different job than they were 2-3 years ago," he added.
He also pushed back on the claim that OpenAI is careless with safety. In addition to more abstract, long-term risks, Mr French-Owen observed a strong emphasis on practical risks, such as political bias, hate speech, and prompt injection.
Mr French-Owen, a former startup founder whose company, Segment, was purchased by Twilio, claimed that he left OpenAI because he was exhausted and had a strong desire to lead rather than follow, and not because of any drama.
Although he expressed "deep conflict" about quitting, he said that he felt "lucky" to have been a member of the elite team developing Codex, OpenAI's ambitious software engineering agent.

Related Articles

First Post
ChatGPT co-creator appointed head of Meta AI Superintelligence Lab
Meta CEO Mark Zuckerberg announced that Shengjia Zhao, co-creator of OpenAI's ChatGPT, will serve as the chief scientist of Meta Superintelligence Labs. The move came months after Meta went on a spree of poaching AI talent from competitors.

Zhao was one of several strategic hires in Zuckerberg's multi-billion-dollar recruitment drive. In the announcement, Zuckerberg said that Zhao's role as co-founder of Meta Superintelligence Labs and its lead scientist was locked in 'from day one'. 'Now that our recruiting is going well and our team is coming together, we have decided to formalise his leadership role,' he added.

The ChatGPT co-creator will report directly to Zuckerberg and to Alexandr Wang, the former CEO of Scale AI, who is now Meta's chief AI officer. 'Shengjia has already pioneered several breakthroughs, including a new scaling paradigm, and distinguished himself as a leader in the field,' the Meta CEO said in a social media post. 'I'm looking forward to working closely with him to advance his scientific vision. The next few years are going to be very exciting!' he concluded.

The man behind ChatGPT

Apart from creating the renowned AI chatbot, Zhao has played an instrumental role in developing GPT-4, the mini models, GPT-4.1, and o3, CNBC reported. He has also previously led synthetic data efforts at an AI research company.

In a separate post on July 25, 2025, Wang celebrated Zhao's appointment: 'We are excited to announce that @shengjia_zhao will be the Chief Scientist of Meta Superintelligence Labs! Shengjia is a brilliant scientist who most recently pioneered a new scaling paradigm in his research. He will lead our scientific direction for our team. Let's go 🚀'

The announcement came just months after reports emerged that Meta had spent billions of dollars hiring AI talent from Google, OpenAI, Apple and Anthropic. Apart from this, the tech giant also acquired Scale AI for a whopping $14 billion and made its CEO Meta's chief AI officer. Zuckerberg has made it clear that his company will spend hundreds of billions of dollars on building huge AI data centres in the US. Hence, it will be interesting to see how Meta performs in an already competitive market.

Business Standard
Ola's AI venture Krutrim lays off over 100, axes Kruti's linguistics team
Bhavish Aggarwal's artificial intelligence (AI) startup Krutrim has initiated a second wave of layoffs, just weeks after launching its flagship assistant Kruti. According to a report by The Economic Times, more than 100 employees, primarily from the linguistics division, were let go last week, following a smaller round of job cuts in June. The downsizing comes even as Krutrim positions Kruti as India's answer to OpenAI's ChatGPT and Google's Gemini, with ambitions rooted in localisation, multilingual capabilities, and voice-first interactivity tailored to the Indian market.

Kruti's language training nears completion

In a statement, Krutrim said the layoffs are part of a 'strategic realignment' to build 'leaner, more agile teams', aligning with evolving business priorities. The company declined to confirm exact figures but cautioned against 'publishing unverified reports.' Citing multiple sources, The Economic Times reported that the cuts heavily impacted linguists hired for full-time roles across 10 Indian languages, including Tamil, Odia, Telugu, and Marathi. Many employees had relocated to Bengaluru just months ago. The linguistics team had reportedly grown to around 600 people before the reductions.

Krutrim cuts funding target amid tepid investor interest

Krutrim became a unicorn in 2024 after raising $50 million from Z47 Partners. Around the same time, it launched Krutrim AI Labs and announced a ₹2,000 crore investment into AI development, with founder Bhavish Aggarwal pledging to scale this up to ₹10,000 crore by next year. While Krutrim had initially aimed to raise $500 million, the target was reduced to $300 million due to tepid investor interest. The company's large language model and cloud services, launched in 2024, have reportedly struggled to gain momentum, with several startups opting instead for more mature platforms offered by global hyperscalers.

Leadership changes have also added to the challenges: nearly a dozen senior executives exited the company in 2024, with further departures in early 2025.

Kruti: India's first agentic AI assistant

Despite the operational shakeup, Krutrim continues to claim Kruti as India's first agentic AI assistant – designed not just to respond to prompts, but to perform tasks such as booking cabs, paying bills, or ordering food. It currently supports 13 Indian languages. 'Our key differentiator will come with integrating local services,' said Sunit Singh, Senior Vice-President for Product at Krutrim, as earlier reported by Business Standard. 'That's not something that will be very easy for global players to do.' Krutrim aims to embed Kruti into everyday Indian digital life by offering voice-driven services that cater to regional and non-English-speaking populations. While Kruti is powered by Krutrim's proprietary Krutrim V2 model, the company employs a hybrid architecture that includes open-source systems and external models. Krutrim competes with global players like OpenAI, Google, and Anthropic, as well as Indian startups such as Sarvam AI and


Mint
AI agents are here. Here's what to know about what they can do – and how they can go wrong
Melbourne, Jul 28 (The Conversation) We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in 'teams' or use tools to accomplish complex tasks.

The latest hot product is OpenAI's ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, 'thinks and acts'. These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do – as well as their drawbacks and risks – is rapidly becoming essential.

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity, the conversational interface limited what could be done with the technology. Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory. Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems. Agents are also 'tool users': they can call on software tools for specialised tasks – things such as web browsers, spreadsheets, payment systems and more.

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms. Other AI developers were quick to follow.

OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google's Vertex AI and Meta's Llama agents. Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged 'cheat at anything' agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas. Coding and software engineering are at the vanguard here, with Microsoft's Copilot coding agent and OpenAI's Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete. OpenAI's Deep Research tackles complex tasks using multi-step online research. Google's AI 'co-scientist' is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more – and get more wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks. OpenAI also says its ChatGPT agent is 'high risk' due to its potential for assisting in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge.

The kinds of risks agents may pose in real-world situations are shown by Anthropic's Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business – and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food. In another cautionary tale, a coding agent deleted a developer's entire database, later saying it had 'panicked'.

Nevertheless, agents are already finding practical applications. In 2024, Telstra deployed Microsoft Copilot subscriptions at scale. The company says AI-generated meeting summaries and content drafts save staff an average of 1–2 hours per week. Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as the Canberra-based construction firm Geocon's use of an interactive AI agent to manage defects in its apartment developments.

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs. People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury.

The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents – especially for more complex tasks.

Learn about agents – and build your own

Despite these ongoing concerns, we can expect AI agents to become more capable and more present in our workplaces and daily lives. It's not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks and limitations. For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance and an agent store for common tasks. For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework. (The Conversation)
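The agent pattern described in this article – pursuing a goal through memory and tool use – can be sketched as a simple loop. The toy below is plain Python, not LangChain code; every name in it (search_tool, calc_tool, run_agent) is invented for illustration, and a real agent framework would have a language model choose each next step rather than follow a fixed plan.

```python
# Toy sketch of an agent loop: goal, tools, and a memory of results.
# All names here are invented for illustration; real frameworks wrap
# an LLM around a loop like this so the model picks each next step.

def search_tool(query):
    """Stand-in for a web-search tool; returns a canned result."""
    return {"query": query, "best_price": 42.0}

def calc_tool(expression):
    """Stand-in for a calculator tool (restricted eval for the demo)."""
    return eval(expression, {"__builtins__": {}})

TOOLS = {"search": search_tool, "calc": calc_tool}

def run_agent(goal, plan):
    """Pursue a goal by executing tool calls and remembering each result."""
    memory = []
    for tool_name, argument in plan:
        result = TOOLS[tool_name](argument)  # the "tool use" step
        memory.append({"tool": tool_name, "arg": argument, "result": result})
    return memory

# Example run: look up a price, then apply 10% tax with the calculator tool.
history = run_agent(
    goal="find the best widget deal, including tax",
    plan=[("search", "widget price"), ("calc", "42.0 * 1.1")],
)
```

The point of the sketch is the structure, not the tools: autonomy comes from replacing the fixed `plan` with model-generated steps, which is also where the supervision and guardrail concerns discussed above enter.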