I'm a software engineer who spent nearly 7 years at Twitter before joining OpenAI. Here's how I got hired and why I love it.

This is an as-told-to essay based on a conversation with Jigar Bhati, a member of technical staff at OpenAI. He has worked at the company since 2023 and was previously a software engineer at Twitter. This story has been edited for length and clarity.
I had been working at Twitter for almost seven years and was looking for a change. Twitter was a big tech company, and it was already handling millions of users. I wanted to join a company operating at a somewhat smaller scale and work in a startup-like environment.
When ChatGPT launched, software engineers were in awe. I was already using it before joining OpenAI, and I was fascinated by the product itself. There was definitely a connection with the product that, as a software engineer, you often don't feel.
I could also see there was a huge opportunity and a huge impact to be made by joining OpenAI. Looking back at the past two years of my journey, I think that's mostly true. It's been exciting seeing the user base grow over those two years. I got to work on and lead some critical projects, which was really great for my own professional development.
And you're always working with smart people. OpenAI hires probably the best talent out there. You're contributing while also learning from different people, so there's exponential growth in your professional career as well.
Building up the right work experience is key
My career at Twitter and before that had been focused on infrastructure capabilities and handling large-scale distributed systems, which was something the team was looking for when I interviewed with OpenAI.
When I joined OpenAI, there were a lot of challenges with respect to scaling infrastructure, and my expertise has helped steer some of those challenges in the right direction. Having that relevant experience helps when you're interviewing for any team, so one of the best things you can do is find the team that's the closest match for you.
Having public references helped me. I had given conference talks and taken part in discussions, and I met a lot of people through that conference networking. That also helps you stay on top of state-of-the-art technological advancements, which helps with whatever work stream you're working on.
I think OpenAI is probably one of the fastest-growing companies in the world, so you really need to move fast in that environment. You're shipping, or delivering, daily or weekly, and you really need to have that cultural fit with the company. When you're interviewing, you need to be passionate about working at OpenAI and solving hard problems.
In general, it's important for professionals to have a good career trajectory so they can showcase how they have solved real, hard problems at a large scale. I think that goes a long way.
Starting your career at a startup or at a larger tech company is fine either way; it depends on the student's interests and cultural fit. OpenAI is not just a place for experienced engineers; it also hires new grads and interns. I've definitely seen people enter through that pipeline, and it's amazing to see how much they enjoy working at OpenAI and how they come up with new ideas, new capabilities, and new suggestions.
But wherever you end up, I think it's important to have good professional growth prospects and to ship products. At any company, you can create impact for the company and for yourself, and build that career trajectory.
OpenAI still feels like a startup
One of the most exciting things is that OpenAI still operates the way it did two years ago. It still feels like a startup, even though we've scaled up the number of engineers, the number of products, and the size of the user base.
It's still very much a startup environment, and there's a big push for productivity and collaboration. The velocity and productivity you get working at OpenAI are definitely much higher than at some of the other companies I've worked for.
That makes things really exciting because you get to ship products on a continuous basis, and you get into that habit of shipping daily rather than weekly or monthly or yearly.
It feels great to be working in AI right now. In addition to having a connection with the product, it makes things very interesting when you are able to work from within the company, shaping the direction of a product that will be used by the entire world.
With GPT-5, for instance, we had early access and could help steer the direction of its launch. Every engineer is empowered to provide feedback that helps improve the model, add new capabilities, and make it more usable and user-friendly for the 700 million users out there. I think that's pretty amazing and speaks to the kind of impact you can make as part of the company.

