A former OpenAI engineer has offered a clear-eyed look at life inside the company, describing a culture shaped by speed, ambition, and pressure. Calvin French-Owen, who joined OpenAI in May 2024 and left in June 2025, spent his final months helping launch Codex, the company's new coding agent. His blog post reveals how quickly OpenAI has scaled and what building products at that pace looks like.
Quick Expansion, Constant Shifts
In the span of a year, OpenAI’s headcount grew from around 1,000 to more than 3,000. That kind of growth created internal friction. Reporting lines changed often, decision-making moved fast, and product teams adapted as they went. Many employees shifted roles, while new hires filled leadership spots that didn’t exist months earlier.
The company has no internal email system. Everything runs through Slack. For some, that setup works fine. For others, it becomes a distraction unless they manage their notifications carefully.
Despite the size, teams often work like startups. Engineers take initiative without waiting for direction. Multiple groups sometimes build similar tools without coordination. In one example, French-Owen mentioned seeing several versions of the same library written by different teams.
Building Codex in Seven Weeks
Codex launched in May 2025. The agent was designed to write code, interact with repositories, and assist developers by producing pull requests. OpenAI built it in just seven weeks. The core team included eight engineers, four researchers, two designers, two go-to-market staff, and a product manager.
Work on the project stretched into late nights and early mornings. French-Owen returned early from paternity leave to join the effort. He described the experience as exhausting but rewarding. By the end, the tool had gone live and attracted users the moment it appeared in ChatGPT’s interface.
Codex was structured as an asynchronous agent. Developers could submit tasks and check results later. The product avoided real-time back-and-forth in favor of longer-running jobs that produced output independently.
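To make that workflow concrete, here is a minimal Python sketch of the submit-then-poll shape French-Owen describes. This is not OpenAI's code; every name in it is hypothetical. The point is the contract: submission returns immediately, and results arrive whenever the background job finishes.

```python
# Minimal sketch of a submit-then-poll asynchronous agent.
# All names are hypothetical; this is not OpenAI's implementation.
import uuid
from concurrent.futures import Future, ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)
_jobs: dict[str, Future] = {}

def run_agent(task: str) -> str:
    # Stand-in for the long-running work: cloning a repo, editing
    # code, running tests, and drafting a pull request.
    return f"pull request draft for: {task}"

def submit(task: str) -> str:
    """Accept a task immediately and hand back a job id."""
    job_id = uuid.uuid4().hex
    _jobs[job_id] = _executor.submit(run_agent, task)  # runs in background
    return job_id

def poll(job_id: str) -> str | None:
    """Return the output if the job has finished, otherwise None."""
    future = _jobs[job_id]
    return future.result() if future.done() else None

job = submit("fix the flaky login test")
print(poll(job))  # None while still running, the output once done
```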
Fast Code, Rough Edges
OpenAI keeps its code in one giant monorepo, mostly written in Python. Some parts use Rust or Go, depending on the task. The repository contains code from experienced infrastructure engineers and newly hired researchers alike. Styles vary, and without strict guidelines, the result can be messy.
Testing is another challenge. Some test runs take over 30 minutes when GPU resources are involved. Breakages happen often, especially on the main development branch. Engineers spend time tracking down unexpected errors, particularly when models behave differently during large-scale training.
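A common mitigation for this kind of pipeline, offered here as a general pattern rather than anything French-Owen describes, is to mark GPU-bound tests so quick runs can skip them. A pytest sketch, with the marker name and detection logic both assumed:

```python
# Hypothetical pytest pattern for gating GPU-heavy tests; the marker
# and the detection heuristic are assumptions, not OpenAI's setup.
import shutil
import pytest

def gpu_available() -> bool:
    # Crude proxy: treat a visible nvidia-smi binary as "GPU present".
    return shutil.which("nvidia-smi") is not None

requires_gpu = pytest.mark.skipif(
    not gpu_available(), reason="no GPU on this runner"
)

@requires_gpu
def test_model_forward_pass():
    # Stand-in for a slow, GPU-bound test that can run 30+ minutes.
    ...
```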
Codex itself demanded heavy GPU usage. One internal feature reportedly matched the GPU load of an entire commercial infrastructure stack.
Decisions Move Quickly
The company favors action over process. New directions often begin with a few people experimenting, and teams grow around those ideas. Approval doesn’t always come first. Teams shift focus rapidly depending on what looks promising.
In one case, the Codex team needed help from ChatGPT engineers. A request went out, and the next day, reinforcements arrived. There were no drawn-out planning meetings or formal reassignments. Work started immediately.
OpenAI’s leaders stay close to product development. Executives post in Slack, join internal discussions, and follow technical progress. It’s common for top-level staff to weigh in directly on engineering topics.
Safety-Focused, but Quiet About It
OpenAI’s public image has drawn criticism, especially from those focused on AI safety. French-Owen said that inside the company, safety was a constant topic, though the focus stayed on near-term risks.
Employees worked on systems to catch hate speech, political manipulation, abuse, and prompt injection. Long-term dangers, like autonomous takeover or runaway intelligence, were studied but didn’t take center stage. Most of the work in this area stayed internal, and not much has been shared publicly.
The company maintains strict confidentiality. Most projects are split into separate Slack channels with limited access. Even within the company, teams do not always know what others are building. External leaks happen often, and product rumors sometimes break on social media before internal announcements are made.
ChatGPT and Consumer Scale
Codex benefited from OpenAI’s growing user base. As soon as it launched, users started testing it without any need for marketing. The tool appeared in ChatGPT’s sidebar, and traffic followed.
French-Owen compared this kind of launch to flipping a switch. For him, coming from Segment, the customer-data startup he co-founded, the consumer scale of ChatGPT was eye-opening. Codex handled real-world codebases, processed large volumes of requests, and returned usable output within short windows.
Internally, product performance was tracked using GitHub pull requests. In less than two months, Codex had produced more than 600,000 public PRs. The team estimated many more had been generated privately.
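For illustration only: one way to derive a count like that from public data is GitHub's search API. The sketch below is a guess at the kind of query involved, not the team's actual method, and the query string is purely hypothetical.

```python
# Hedged sketch: counting public PRs via GitHub's search API.
# The query is illustrative; the article does not say how the
# 600,000 figure was actually measured.
import json
import urllib.parse
import urllib.request

def count_public_prs(query: str) -> int:
    """Return GitHub's total_count for PRs matching the search query."""
    q = urllib.parse.quote(f"{query} is:pr is:public")
    url = f"https://api.github.com/search/issues?q={q}"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total_count"]

print(count_public_prs('"Codex" in:title'))  # hypothetical query
```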
Company Culture, Then and Now
OpenAI has roots in research. Early on, it focused more on science than shipping products. That has shifted. Today, the company spans research, enterprise sales, consumer tools, and hardware. Different teams carry different priorities.
Despite the growth, OpenAI kept its promise to make powerful models available to the public. Users can try tools for free, without login. Startups can access APIs without long-term contracts. The company hasn’t locked its best systems behind paywalls or restricted them to select partners.
Twitter remains a major influence. Internally, employees monitor viral tweets, respond to public feedback, and adjust product decisions based on social response. A running joke inside the company refers to this as “operating on Twitter vibes.”
Final Thoughts from the Inside
Looking back, French-Owen described his year at OpenAI as one of the most productive chapters of his career. He learned how large models are trained, how to manage scaling Python codebases, and how to build something fast under pressure.
He left not because of problems, but to start something new. The experience gave him what he came for: insight into how models evolve, a chance to build with top talent, and an opportunity to ship a product with real reach.
His parting advice to other engineers: big AI labs are moving fast, and joining one could be the best way to understand what’s coming next.
