
📝 The Pragmatic Engineer Blog


📹 Video Information:

Title: Software engineering with LLMs in 2025: reality check
Duration: 25:18

Overview

This video explores the evolving landscape of artificial intelligence (AI) within the software development ecosystem, focusing on how different players—startups, big tech companies, and seasoned engineers—are adapting and innovating. The chapters sequentially examine AI development tools startups, the role of big tech, the broader AI startup environment, the impact on experienced software engineers, and finally, the open questions and future challenges in AI software development. Together, these sections provide a comprehensive narrative about the current state and future direction of AI in software engineering.

Chapter-by-Chapter Deep Dive

Intro (00:00)

Core Concepts and Main Points:
The introduction sets the stage by outlining the transformative impact of AI on software development. It highlights the rapid growth of AI tools and the changing roles of developers in this new environment.

Key Insights and Takeaways:
- AI is not just a futuristic concept but an active force reshaping software creation today.
- There is a need to understand how different sectors—startups and big tech—are contributing to this transformation.

Actionable Strategies or Advice:
- Viewers are encouraged to approach AI as a tool that augments human capabilities rather than replacing developers outright.
- Embrace continuous learning to keep pace with AI advancements.

Connection to Overall Theme:
This chapter frames the video’s exploration of AI’s integration into software development, setting a foundation for the detailed discussions that follow.

AI dev tools startups (03:47)

Core Concepts and Main Points:
This chapter focuses on startups creating AI-powered development tools that enhance productivity and streamline coding processes.

Key Insights and Takeaways:
- AI dev tools startups are innovating rapidly, creating products like code generators, debugging assistants, and automated testing platforms.
- These startups often leverage large language models to assist developers with code suggestions and problem-solving.
- The competitive landscape is intense, with startups racing to build tools that integrate seamlessly into developers’ workflows.

Actionable Strategies or Advice:
- For developers, adopting AI tools can significantly boost productivity and reduce mundane tasks.
- Startups should focus on user-centric design and integration ease to gain adoption.
- Collaboration with developer communities is vital for tuning AI models to real-world needs.

Examples/Statistics:
- Mention of popular AI tools emerging from startups, though specific names are not detailed.
- Reference to rapid funding growth in this sector.

Connection to Overall Theme:
This chapter illustrates how entrepreneurial efforts are shaping the AI development tool landscape, a key part of the overall AI software ecosystem.

Big Tech (06:28)

Core Concepts and Main Points:
Big tech companies’ role in AI development is examined, showing how they influence the broader AI ecosystem through infrastructure, research, and product offerings.

Key Insights and Takeaways:
- Big tech firms invest heavily in foundational AI research and build large-scale AI platforms.
- Their resources enable them to develop robust, scalable AI tools that smaller players cannot easily replicate.
- Integration of AI into mainstream products (like cloud services and developer tools) is a major focus.

Actionable Strategies or Advice:
- Developers and startups should leverage big tech AI platforms and APIs to accelerate their own AI initiatives.
- Big tech’s open-source contributions serve as valuable resources for the developer community.
- Vigilance is needed regarding dependency on big tech platforms to avoid lock-in.

Examples/Statistics:
- Discussion of cloud AI services, pre-trained models, and API ecosystems from major tech firms.
- Insight into how big tech’s scale drives innovation but also raises competitive and ethical questions.

Connection to Overall Theme:
This chapter complements the startup-focused discussion by showing the foundational role big tech plays in AI development and deployment.

AI startups (12:12)

Core Concepts and Main Points:
This chapter broadens the focus to AI startups beyond just development tools, including those applying AI in vertical industries and novel applications.

Key Insights and Takeaways:
- AI startups are diverse, ranging from healthcare AI to fintech and creative industries.
- Their agility allows them to experiment with new AI use cases faster than established companies.
- Funding and market adoption are critical challenges but also opportunities for rapid growth.

Actionable Strategies or Advice:
- Startups should deeply understand their domain to apply AI effectively and differentiate themselves.
- Building strong partnerships and focusing on user experience can enhance adoption rates.
- Monitoring regulatory and ethical considerations is increasingly important.

Examples/Statistics:
- References to successful AI startups disrupting traditional sectors.
- Emphasis on the importance of domain expertise in AI application.

Connection to Overall Theme:
This chapter situates AI development tools within the broader AI startup ecosystem, highlighting the innovative and applied dimensions of AI entrepreneurship.

Seasoned software engineers (15:14)

Core Concepts and Main Points:
The impact of AI on experienced software engineers is analyzed, including changes to job roles, required skills, and career trajectories.

Key Insights and Takeaways:
- AI automates routine coding tasks but increases demand for skills in AI integration, data handling, and system design.
- Seasoned engineers are positioned to lead AI adoption due to their domain knowledge.
- Continuous learning and adaptability are essential for career longevity.

Actionable Strategies or Advice:
- Engineers should upskill in AI-related technologies and frameworks.
- Embrace AI tools as collaborators to enhance productivity rather than viewing them as threats.
- Participate in AI tool development or evaluation to stay at the forefront.

Examples/Statistics:
- Anecdotes about engineers successfully transitioning to AI-enhanced roles.
- Discussion of evolving job descriptions reflecting AI competencies.

Connection to Overall Theme:
This chapter personalizes the AI transformation by focusing on its effects on individual practitioners, tying technological change to human adaptation.

Open questions (19:45)

Core Concepts and Main Points:
The final chapter reflects on unresolved challenges and questions in AI-driven software development.

Key Insights and Takeaways:
- Key issues include AI model reliability, ethical concerns, bias mitigation, and long-term impacts on employment.
- The evolving regulatory landscape will shape AI tool development and deployment.
- There is uncertainty about how AI will redefine software engineering pedagogy and industry standards.

Actionable Strategies or Advice:
- Stakeholders should engage in interdisciplinary dialogue to address ethical and social implications.
- Developers and companies must prioritize transparency and accountability in AI usage.
- Continuous monitoring of AI’s impact on workflows and outcomes is necessary.

Examples/Statistics:
- Mention of recent incidents highlighting AI biases or failures.
- Calls for collaborative frameworks to govern AI’s integration into software development.

Connection to Overall Theme:
This chapter concludes the video by acknowledging that while AI offers great promise, it also raises complex questions that require ongoing attention.

Cross-Chapter Synthesis

Several cross-cutting themes emerge across chapters: the accelerating pace of AI innovation, the interplay between startups and big tech, and the transformative impact on software engineers. The video guides viewers from understanding the players and tools (AI dev tools startups, big tech) through the broad ecosystem of AI applications (AI startups) to the human element (seasoned engineers) and finally to the ethical and practical uncertainties ahead (open questions).

The narrative builds progressively: starting with the technologies and companies driving change, moving to individual adaptation, and ending with a call for thoughtful engagement with AI’s broader consequences. Key points such as the importance of continuous learning, the value of collaboration, and the need for ethical vigilance recur throughout multiple chapters, reinforcing their significance.

Actionable Strategies by Chapter

  • Intro: Embrace AI as an augmentation tool; commit to ongoing learning.
  • AI dev tools startups: Adopt AI tools to improve productivity; focus on seamless integration and community feedback for startups.
  • Big Tech: Utilize big tech AI platforms and open-source resources; be mindful of vendor lock-in.
  • AI startups: Deeply understand your domain; build partnerships; keep user experience and ethics front and center.
  • Seasoned software engineers: Upskill in AI technologies; collaborate with AI tools; lead adoption initiatives.
  • Open questions: Engage in ethical discussions; promote transparency and accountability; monitor AI impacts continuously.

Warnings and Pitfalls

  • Dependency risks on big tech platforms (Big Tech chapter)
  • AI model biases and reliability issues (Open Questions chapter)
  • Challenges in funding and adoption for startups (AI dev tools startups and AI Startups chapters)
  • Potential job disruption without skill adaptation (Seasoned software engineers chapter)

Resources and Next Steps

  • Leverage big tech AI services and open-source AI frameworks (Big Tech chapter)
  • Participate in developer communities for AI tool feedback (AI dev tools startups chapter)
  • Stay informed on AI ethics and regulation developments (Open questions chapter)
  • Pursue AI-related training and certifications (Seasoned software engineers chapter)

This structured summary provides a detailed roadmap of the video’s content, linking each chapter’s insights into a cohesive understanding of AI’s role in contemporary software development.

📹 Video Information:

Title: GitHub did not ship much 2015-2020. Why?
Duration: 01:41

Overview

The video discusses the challenges GitHub faced between 2015 and 2020, particularly around innovation, stability, and managing user expectations. It highlights how the platform’s rapid growth, beloved brand status, and internal culture influenced its product development and release strategies.

Main Topics Covered

  • GitHub’s growth and challenges from 2015 to 2020
  • User expectations and emotional attachment to GitHub
  • The impact of high expectations on product shipping and innovation
  • Internal culture and decision-making around feature releases
  • The transition period involving CEO changes and organizational culture shifts
  • Balancing innovation with stability, security, and accessibility

Key Takeaways & Insights

  • GitHub’s brand loyalty created very high expectations, making users sensitive to changes and outages.
  • Fear of negative reactions led to cautious shipping practices, with many features developed internally but not publicly released.
  • A “loud minority” of vocal users can create pressure that affects decision-making, even when the silent majority may be fine with changes.
  • Organizational changes and cultural challenges slowed down public innovation despite ongoing internal development.
  • Significant behind-the-scenes investment in stability, security, and accessibility was crucial to maintaining platform reliability.
  • Over time, GitHub managed to improve its pace of innovation while maintaining fundamental platform qualities.

Actionable Strategies

  • Balance innovation with stability by investing in infrastructure and security behind the scenes.
  • Listen carefully to user feedback but recognize that vocal minorities may not represent the majority’s opinion.
  • Foster an internal culture that encourages shipping features confidently without excessive fear of backlash.
  • Manage user expectations transparently to reduce disappointment when changes happen.
  • Prioritize accessibility and availability as core components of product development.
  • Use internal testing and staged rollouts to refine features before public release.

Specific Details & Examples

  • The video references the period before and after Microsoft’s acquisition of GitHub as a time of organizational and cultural transition.
  • The Octocat (GitHub’s mascot) is mentioned as a symbol of user affection and brand loyalty.
  • The concept of “staff shipping,” or internal dogfooding, is introduced, describing features released to employees internally but not publicly.
  • CEO changes during the period contributed to cultural shifts impacting innovation and shipping practices.

Warnings & Common Mistakes

  • Avoid letting fear of negative user reactions paralyze shipping and innovation.
  • Be cautious of over-prioritizing the vocal minority’s complaints at the expense of broader user needs.
  • Don’t underestimate the importance of behind-the-scenes work in security, availability, and accessibility.
  • Avoid internal silos where features never reach public release due to excessive caution.

Resources & Next Steps

  • While no specific tools or resources are mentioned, the video implies that organizations should invest in infrastructure, user research, and cultural change initiatives to improve innovation and stability.
  • Viewers are encouraged to focus on balancing user feedback with internal confidence to ship features.
  • Further learning could include studying GitHub’s post-2020 development practices and Microsoft’s integration strategies.

📹 Video Information:

Title: GitHub's tech stack
Duration: 00:58

Overview

The video discusses the current state of a large-scale Ruby on Rails monolithic application, highlighting its continued use alongside the integration of modern technologies and architectures. It emphasizes the evolution from a pure monolith to a more diverse tech stack incorporating various programming languages and platforms.

Main Topics Covered

  • Status and scale of the Ruby on Rails monolith
  • Integration of modern frontend technologies like React
  • Development of microservices and APIs in different languages (Go, .NET)
  • Evolution towards modern architecture beyond the monolith
  • Use of mobile development technologies (Swift, Kotlin)
  • Cloud infrastructure and on-premises data centers

Key Takeaways & Insights

  • Despite its age and size, the Ruby on Rails monolith remains central with a large engineering team contributing regularly.
  • The monolith has over 2 million git commits and tens of thousands of pull requests, indicating active and extensive development.
  • The architecture is evolving by incorporating microservices and APIs written in languages suited to their specific needs, such as Go for high API call volumes.
  • Frontend development is shifting towards React, improving UI/UX.
  • Mobile applications are developed using native technologies like Swift for iOS and Kotlin for Android.
  • Infrastructure strategy includes a hybrid approach with cloud services and physical data centers.

Actionable Strategies

  • Maintain and evolve legacy monolithic systems by gradually integrating modern technologies rather than complete rewrites.
  • Utilize the right programming languages and frameworks for specific service needs to optimize performance (e.g., Go for API-heavy services).
  • Adopt modern frontend frameworks like React to enhance user interfaces.
  • Develop native mobile applications using platform-specific languages for better performance and user experience.
  • Implement a hybrid cloud strategy combining cloud resources with on-premises infrastructure for flexibility and control.

Specific Details & Examples

  • The monolithic Ruby on Rails application has about 700 engineers contributing at different times.
  • Over 2 million git commits and tens of thousands of pull requests have been made into the monolith.
  • The Copilot API is implemented in Go, designed to handle a high volume of inference API calls.
  • Transitioning legacy .NET codebases towards more modern architecture.
  • Mobile apps use Swift (iOS) and Kotlin (Android).
  • Cloud infrastructure is complemented by commercial data centers running bare-metal servers.

Warnings & Common Mistakes

  • While not explicitly stated, the implication is that abandoning large monolithic systems entirely might not be necessary or practical.
  • A common mistake might be trying to rewrite everything at once instead of evolving the architecture incrementally.
  • Managing a large engineering team contributing to a single monolith requires careful coordination and tooling to handle the volume of commits and pull requests.

Resources & Next Steps

  • The video references future posts or content that will discuss the transition beyond the monolith in more detail.
  • Encourages exploration of microservices architecture and modern frontend frameworks.
  • Suggests monitoring and managing large-scale codebases with robust version control and code review processes.

🚀 Deep dive into GitHub’s journey with CEO Thomas Dohmke:
- GitHub’s evolution from a Ruby on Rails monolith to a modern hybrid tech stack
- Remote-first & async culture thriving within Microsoft’s ecosystem
- Copilot’s impact since 2021: AI augmenting developers, not replacing juniors
- Embracing open source: Copilot extensions now open source to empower devs
- Security is embedded culture, with 150+ dedicated experts
- Hiring juniors remains a priority for fresh ideas & new perspectives
- GitHub’s vision: Engineers directing AI agents, mastering complexity, not managing autonomous bots

#GitHub #AI #Copilot #SoftwareEngineering #RemoteWork #OpenSource #DeveloperTools #TechCulture #FutureOfWork

Why Test-Driven Development (TDD) Isn’t Always the Best Fit: Lessons from a Real-World Feature Launch

When it comes to software development, Test-Driven Development (TDD) is often hailed as a best practice for ensuring code quality and preventing bugs. However, real-world experiences sometimes reveal that TDD isn’t a one-size-fits-all solution. Here’s a candid reflection on a feature launch that challenges the conventional wisdom around TDD and highlights the complexities of building large-scale, rapidly evolving systems.

The Feature: Expanding Relationship Types

The first feature implemented was simple on paper: expanding the list of relationship statuses to include “civil union” and “domestic partnership” alongside existing options like “single,” “it’s complicated,” and “married.” The goal was straightforward, and the development process followed TDD principles—writing tests before the code.

The Reality: TDD as a Waste of Time

Despite rigorous TDD, the rollout hit a snag. The notifications system broke due to implicit coupling between components—an interdependency that wasn’t obvious or directly testable. The error was subtle and escaped detection during testing, leading to an increase in error rates post-launch.

Thankfully, a colleague noticed the issue, quickly developed and deployed a hotfix, and the problem was resolved. This incident underscored a crucial insight: the source of many bugs wasn’t complex algorithms or logic errors but rather configuration and subsystem relationships that tests couldn’t easily cover.
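A minimal, hypothetical sketch of this failure mode (module and function names invented for illustration, not taken from Facebook’s codebase): each module passes its own TDD suite, but the dependency between them is implicit, so no unit test on the changed module can see the break.

```python
# relationship.py -- the unit that was changed, fully covered by its own tests.
VALID_STATUSES = {"single", "complicated", "married",
                  "civil union", "domestic partnership"}  # newly expanded

def set_status(profile: dict, status: str) -> dict:
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    profile["status"] = status
    return profile

# notifications.py -- a separate subsystem whose template table silently
# assumes the OLD status list; relationship.py's tests cannot see this.
TEMPLATES = {
    "single": "{user} is now single",
    "complicated": "{user}'s relationship is complicated",
    "married": "{user} got married",
}

def render_notification(user: str, status: str) -> str:
    # Raises KeyError at runtime for the new statuses.
    return TEMPLATES[status].format(user=user)
```

Setting a profile to “civil union” passes every test on its own module, yet rendering its notification raises `KeyError` in production — exactly the kind of cross-subsystem break that monitoring and a fast hotfix culture catch better than more unit tests.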

The Culture: “Nothing at Facebook Is Somebody Else’s Problem”

An important cultural element helped mitigate risks. At Facebook, the mantra “nothing is somebody else’s problem” fostered ownership and proactive problem-solving across teams. When errors occurred, people didn’t pass the buck; instead, they jumped in to fix issues swiftly, ensuring system stability despite rapid innovation and scaling.

The Takeaway: Test What Matters, Don’t Over-Test What Doesn’t

The experience highlights that while TDD can be valuable, it’s not always the best tool in environments where:

  • Bugs stem from implicit coupling and configurations rather than isolated code units.
  • Tests can’t easily capture the interplay between subsystems.
  • The system is evolving rapidly, requiring quick iterations and flexible responses to unforeseen issues.

In such contexts, focusing heavily on TDD can be inefficient and may create a false sense of security. Instead, investing in robust monitoring, quick feedback loops, cross-team collaboration, and a culture of shared responsibility can be more effective strategies for maintaining quality and stability.

Conclusion

TDD is a powerful technique but not a silver bullet. Real-world software development, especially in complex, large-scale systems, demands a nuanced approach that balances testing with other practices like vigilant monitoring and a collaborative culture. Embracing this balance can lead to more reliable, scalable, and innovative software products.


By sharing these insights, developers and teams can better understand when and how to apply TDD—and when to complement it with other strategies—to build systems that truly work in practice, not just in theory.

How AI is Revitalizing Software Development: Insights from Kent Beck on TDD, Extreme Programming, and Agile Evolution

Kent Beck, a pioneer in software development methodologies and a key figure behind Extreme Programming (XP) and the Agile Manifesto, recently shared his thoughts on how AI tools are transforming coding and how traditional practices like Test-Driven Development (TDD) remain relevant in this new era. With over five decades of programming experience, Kent offers a unique perspective on the evolution of software engineering, the challenges of integrating AI agents into the workflow, and the shifting landscape of team dynamics and development culture.

The Genie Metaphor: AI as an Unpredictable Coding Assistant

Kent describes AI coding assistants as "genies" — powerful helpers that grant wishes but often interpret them in unexpected or even frustrating ways. Unlike autocomplete tools or simple suggestions, these agentic AI systems act autonomously, making decisions and implementing code without constant human permission. This can lead to impressive leaps, such as creating stress testers or refactoring complex data structures quickly, but also to moments where the AI misinterprets requirements, changes or deletes tests, and breaks functionality.

This dynamic creates a highly addictive interaction pattern akin to a slot machine, where intermittent successes encourage continued engagement despite occasional setbacks. Kent emphasizes the importance of maintaining a robust suite of fast-running tests (running in milliseconds) as a safeguard to catch when the "genie" strays from expected behaviors.
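A small illustration of what such a safeguard looks like in practice (Python, with an invented example function): each check runs in well under a millisecond, so the whole suite can be re-run after every AI edit and immediately flag a “genie” that quietly rewrote behavior.

```python
# events.py -- a function an AI agent might be asked to "optimize".
def sort_events(events):
    """Return events ordered by timestamp; must not mutate the input."""
    return sorted(events, key=lambda e: e["ts"])

# Millisecond-scale tests that pin down the contract.
def test_orders_by_timestamp():
    out = sort_events([{"ts": 3}, {"ts": 1}, {"ts": 2}])
    assert [e["ts"] for e in out] == [1, 2, 3]

def test_does_not_mutate_input():
    events = [{"ts": 2}, {"ts": 1}]
    sort_events(events)
    assert [e["ts"] for e in events] == [2, 1]

def test_empty_input():
    assert sort_events([]) == []
```

Because the tests are this cheap, they can run on every save or every agent turn, which is what makes them useful as a tripwire rather than a ceremony.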

Why Programming Languages Matter Less Today

Having worked with countless programming languages, Kent notes a significant shift in his relationship with them. While he was once emotionally attached to languages like Smalltalk, today he views languages more pragmatically, focusing on good practices over language specifics. Thanks to AI tools, he experiments with new languages like Swift, Go, Rust, and Haskell without the steep learning curves of the past. This detachment enables him to focus on higher-level design and ambitious projects rather than language syntax minutiae.

Revisiting the Agile Manifesto and Extreme Programming

Kent was instrumental in the creation of the Agile Manifesto in 2001, a response to the limitations of traditional waterfall development. The manifesto emphasized iterative development, continuous feedback, and collaboration — ideas Kent had been exploring through workshops and practical experience for years. He recalls the naming of Extreme Programming as a deliberate, somewhat provocative choice to differentiate it from existing methodologies and capture attention.

XP centers around four core activities repeated in short cycles: figuring out what to do, designing the structure, implementing features, and verifying they work. Practices such as pair programming are strongly recommended (though not mandated) due to their demonstrated effectiveness in reducing defects.

The Origins and Impact of Test-Driven Development (TDD)

TDD emerged directly from Kent’s childhood fascination with programming and early experiments with tape-to-tape processing. By writing tests before code, developers can reduce anxiety, gain quick feedback, and design better APIs. Kent stresses that TDD is not merely a mechanical red-green cycle but an iterative process involving constant design reflection and adjustment.

Addressing criticisms that TDD stifles upfront architecture, Kent explains that design happens continuously and fluidly during the TDD cycle. Writing tests first forces developers to clarify intentions and defer unnecessary commitments, fostering better design decisions in response to evolving understanding.
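As a concrete (invented) example of that red-green rhythm: the assertions below are written first and fail, then the simplest `slugify` that satisfies them is written, and the design — lowercase, hyphen-separated, punctuation dropped — emerges from the tests rather than from an upfront spec.

```python
import re
import unittest

def slugify(text: str) -> str:
    # Step 2 (green): the simplest implementation that passes the tests.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 1 (red): these tests existed before slugify did; each one
# records a small design decision made explicit.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_drops_punctuation(self):
        self.assertEqual(slugify("TDD: red, green, refactor!"),
                         "tdd-red-green-refactor")
```

Run with `python -m unittest` (or pytest); a refactor step would then tidy the implementation while the tests stay green.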

The Role of TDD with AI Coding Agents

Despite AI’s assistance, Kent remains committed to TDD, especially when working with "genie-like" AI agents. He uses tests to communicate explicit expectations to the AI, preventing it from making harmful assumptions or deleting critical tests. Although AI tools can introduce disruptive changes, a comprehensive test suite ensures quick detection and correction of issues, maintaining codebase stability.

Kent anticipates that teams who adopt rigorous testing and feedback practices will integrate AI tools more effectively, accelerating development without sacrificing quality.

Lessons from Facebook’s Engineering Culture (2011–2017)

Kent’s tenure at Facebook provided insight into a unique engineering environment characterized by rapid innovation, strong ownership culture, and extensive observability. Unlike traditional enterprises, Facebook relied less on exhaustive unit testing and more on multiple feedback loops, code reviews, feature flags, incremental rollouts, and real-time monitoring to maintain stability at massive scale.
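One of those feedback loops — a feature flag with an incremental rollout — can be sketched in a few lines (hypothetical helper names, not Facebook’s actual system). Deterministic hashing keeps a given user consistently in or out of the rollout, and raising the percentage only widens the enabled set.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Place user_id into a stable 0-99 bucket for this feature; the
    user is in the rollout while their bucket is below `percent`."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < percent

def notify(user_id: str) -> str:
    # Ramp from 1% to 100% while watching error rates; because the
    # bucket is deterministic, nobody flaps between code paths.
    if in_rollout(user_id, "new-notifications", percent=5):
        return "new pipeline"
    return "old pipeline"
```

Paired with real-time monitoring, this lets a bad change surface on a small slice of traffic and be rolled back before most users ever see it.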

This environment highlighted that some kinds of errors—especially configuration and integration issues—are difficult to catch with tests alone, underscoring the importance of diverse safeguards. Kent also praises the cultural mantra "nothing at Facebook is somebody else’s problem," fostering accountability and collaboration.

The Shift in Development Culture: Startups vs. Big Tech

Comparing his experiences at startups and large organizations, Kent observes that startups often retain broader ownership and alignment incentives among small teams, fostering creativity and ambition. In contrast, big tech companies may experience siloed optimization and politics that constrain innovation horizons. Nevertheless, the scale and resources of large companies provide unique opportunities and challenges.

Embracing Experimentation and Change in the AI Era

Kent encourages developers and organizations to embrace experimentation, rapid iteration, and the willingness to discard code that doesn’t work. The AI revolution lowers the cost of trying new ideas, enabling teams to explore many more possibilities and rapidly converge on valuable solutions.

Rapid Fire Insights

  • Favorite programming language: Smalltalk remains Kent’s top choice, with JavaScript as a close second due to its similarities to Smalltalk.
  • Preferred AI tool: Kent favors Claude for its versatility and integration across different platforms.
  • Recommended reading: The Timeless Way of Building by Christopher Alexander, a book about patterns and design in architecture that resonates with software design principles.

Final Thoughts

Kent Beck’s reflections underscore that while AI tools are reshaping how we code, foundational practices like TDD and XP remain vital. They provide structure, clarity, and quality assurance that help harness AI’s power effectively. The future of software development lies in blending human creativity, rigorous engineering discipline, and intelligent assistance — a combination that promises to make programming more fun, productive, and ambitious than ever before.


For those interested in following Kent Beck’s ongoing work, his newsletter Tidy First offers regular insights into software design and development. Additionally, exploring the evolving engineering cultures at major tech companies can provide valuable context for applying these lessons in your own environment.

If you enjoyed these insights, consider subscribing to podcasts or channels that explore the intersection of software craftsmanship and AI innovation — the future is bright, and the journey is just beginning!

The Genius Behind C#: How Anders Hejlsberg Shaped Modern Programming

When .NET launched alongside Visual Studio, it marked a significant milestone in software development. However, one of the pivotal reasons for its success was the design of the C# programming language itself. At the heart of C#’s creation is the visionary Anders Hejlsberg, whose influence continues to resonate in the developer community.

The Legacy of Anders Hejlsberg

Before joining Microsoft, Anders Hejlsberg was a key figure at Borland, a renowned developer tools company during the late 1980s and early 1990s. He was the mastermind behind Turbo Pascal—a revolutionary programming environment that combined an editor, debugger, and compiler into a single, lightning-fast package. Remarkably, Turbo Pascal could run efficiently on PCs with as little as 256K of RAM, providing developers with rapid feedback and a seamless coding experience.

Hejlsberg’s work on Turbo Pascal was notable not just for its speed but also for its thoughtful language design. He introduced meaningful features to Pascal, enhancing both its usability and power. This combination of performance and developer-friendly design set a new standard in programming tools.

Bringing Developer-Centric Design to Microsoft

In the mid-1990s, Anders Hejlsberg joined Microsoft, bringing with him a deep understanding of what developers need. His arrival helped rejuvenate Microsoft’s approach to developer tools and language design. He had a unique ability to balance adding powerful new features to a language while avoiding unnecessary complexity—a skill that is crucial in language design.

Hejlsberg’s influence is not limited to C#. He also played a major role in creating TypeScript, a language that has become essential for modern web development. His ongoing commitment to improving developer experiences underscores his status as a true innovator in programming languages.

The Impact of C# and Its Design Philosophy

C# was designed with developers in mind. It combines the power and performance needed for modern applications with a clean, intuitive syntax that is easy to learn and use. This design ethos—rooted in Hejlsberg’s experience and philosophy—helped .NET and Visual Studio become dominant tools in the software development world.

For over 25 years, Anders Hejlsberg has worked tirelessly to refine programming languages so that developers can be more productive and create better software. His genius lies not just in technical expertise but in understanding the developer’s perspective and needs.

Conclusion

The success of .NET and C# is inseparable from the vision and craftsmanship of Anders Hejlsberg. From Turbo Pascal’s blazing speed to the modern elegance of C# and TypeScript, his contributions have shaped the way millions of developers write code today. As software development continues to evolve, the principles he championed—speed, simplicity, and developer focus—remain as relevant as ever.


By appreciating the history and thought behind C#, developers can gain a deeper understanding of why the language works so well and how it continues to adapt to the changing landscape of software development. Anders Hejlsberg’s story is a testament to the power of thoughtful language design in driving innovation and success.

The Evolution of Microsoft Developer Tools: Insights from Scott Guthrie

Microsoft has been a cornerstone in the software development world for nearly five decades. From its early days focusing on developer tools to becoming a cloud and AI powerhouse, Microsoft’s journey offers fascinating insights into how developer tools have evolved and shaped the tech landscape. In a recent in-depth conversation with Scott Guthrie—Microsoft’s Executive Vice President for Cloud and AI and a veteran with 28 years at the company—we explore the milestones, challenges, and bold decisions that have defined Microsoft’s developer ecosystem.

The Early Days: Developer Tools at Microsoft’s Core

Microsoft started not as a software giant but as a developer tools company. Its first product was Microsoft BASIC for the Altair computer in 1975, a foundational tool that enabled programming on early personal computers. This focus on empowering developers continued, with products like QuickBASIC and Microsoft C helping developers build applications on top of Windows.

Scott emphasizes that Microsoft’s success was always linked to enabling developers. “If you don’t have developers building applications, you don’t have a business,” he notes. This philosophy continues today with Azure and modern developer tools.

Democratizing Development: Visual Basic and Beyond

In the 1990s, Microsoft made development accessible to a broader audience with tools like Visual Basic and Microsoft Access. Visual Basic, in particular, revolutionized development by allowing users to drag and drop interface elements and write simple code, making programming approachable for non-experts, such as financial traders.

One key innovation was the “edit and continue” feature, enabling developers to modify code while the program was running without lengthy recompilation. This dramatically increased productivity and foreshadowed today’s rapid development cycles.

Scott draws parallels to today’s low-code/no-code movements and AI-assisted development, highlighting how technology continues to lower barriers for creators.

The Birth of .NET and Visual Studio

Scott joined Microsoft in 1997, during a pivotal time when the company was developing Visual Studio and the .NET framework. The goal was to unify various programming languages and tools under one platform, allowing developers to build robust applications more efficiently.

.NET introduced the Common Language Runtime (CLR), which supported multiple languages like Visual Basic, C++, and later C#. Scott and his colleague Mark Anders created ASP.NET during this era, pioneering web development on the Microsoft stack.

Visual Studio became an integrated development environment (IDE) that brought together coding, debugging, and profiling tools, raising developer productivity significantly.

Steve Ballmer’s Iconic “Developers, Developers, Developers” Moment

A memorable moment from this era was Steve Ballmer’s impassioned speech emphasizing the importance of developers to Microsoft’s success. Scott recalls that the core message was simple but powerful: winning the hearts and minds of developers is critical because developers build the innovative solutions that drive platform adoption.

This developer-centric mindset became deeply embedded in Microsoft’s culture, and events like Microsoft Build continue to reflect this focus.

The Evolution of C# and Anders Hejlsberg’s Role

Anders Hejlsberg, the creator of Turbo Pascal, played a vital role in shaping C# and TypeScript at Microsoft. His expertise helped design C# as a language that balanced power and elegance, introducing features such as generics that distinguished it from competitors like Java.

Scott praises Anders’ long-term vision and consistency, which has helped maintain C#’s relevance and productivity over multiple iterations.

The Era of Expensive Tools and Documentation

In the 1990s and early 2000s, Microsoft’s developer tools and documentation were premium products. Developers often paid thousands of dollars annually for Visual Studio licenses and MSDN subscriptions, which included extensive documentation on CDs before widespread internet access.

Though costly, these investments paid off by dramatically boosting developer productivity. Scott recalls how MSDN was revolutionary for its time, providing a centralized, searchable knowledge base that was otherwise unavailable.

Cloud Computing and the Azure Journey

Microsoft Azure was introduced in 2008, initially struggling against competitors like Amazon Web Services. When Satya Nadella took over the Server and Tools division in 2011, Scott was asked to help turn Azure around.

They discovered usability issues and a lack of support for open source and Linux. Through focused efforts, including supporting Linux, virtual machines, and hybrid cloud scenarios, Azure grew from a distant seventh place to become a top cloud provider.

Scott highlights the importance of choosing the right markets and building developer-friendly platforms to gain traction, a lesson that applies broadly to startups and enterprises alike.

Embracing Open Source: From CodePlex to GitHub

Microsoft’s relationship with open source evolved significantly over time. Early attempts like CodePlex were limited, but as the business model shifted towards cloud and services, Microsoft embraced true open source with permissive licenses and community involvement.

Opening up .NET and adopting open source projects like jQuery marked a cultural shift. This openness paved the way for Microsoft’s acquisition of GitHub in 2018, a move that was initially met with skepticism but ultimately strengthened Microsoft’s position in the developer community.

The 2014 Turning Point: Three Bold Developer-Centric Decisions

In 2014, Scott and his team made three transformative decisions to regain relevance with developers:

  1. Introduce the Community Edition of Visual Studio: A free, fully featured version for small projects and independent developers.
  2. Open Source .NET and Make it Cross-Platform: Hosting on GitHub and enabling contributions under permissive licenses.
  3. Develop Visual Studio Code (VS Code): A lightweight, open source, cross-platform code editor optimized for web developers.

These decisions, made within a short brainstorming session, laid the foundation for Microsoft’s renewed developer momentum. VS Code, initially the most speculative, became hugely successful and helped bridge Microsoft’s relationship with the open source community.

Looking Ahead: AI, Developer Agents, and Cloud Innovation

Scott is excited about the next generation of developer tools powered by AI. Rather than just responding to requests, AI agents will become collaborators that can autonomously handle complex tasks, from generating code to monitoring application health.

He compares AI copilots to giving developers “Iron Man suits,” dramatically enhancing productivity and creativity.

Azure’s global footprint and hybrid capabilities will further empower developers to build scalable, secure, and compliant applications worldwide.

Advice for Developers in the Age of AI

Scott encourages developers not to fear automation but to embrace it as a productivity enhancer. History shows that tools like debuggers, garbage collection, and open source were once controversial but ultimately empowered developers and created more opportunities.

The key to long-term success is focusing on problem-solving, creativity, and leveraging new technologies rather than clinging to specific syntax or manual tasks.

Conclusion

Microsoft’s journey from a BASIC interpreter startup to a leader in cloud, AI, and open source development underscores the importance of bold decisions, developer focus, and adaptability.

Scott Guthrie’s reflections highlight that at the heart of every technological evolution is a commitment to empowering developers—whether through tooling, platforms, or community engagement.

For developers navigating today’s fast-changing landscape, the message is clear: embrace emerging technologies, focus on delivering value, and remember that innovation often comes from collaboration between humans and machines.


For further exploration of Microsoft’s developer tools evolution, check out detailed resources and stay tuned for more insights as the company continues to innovate in the cloud and AI space.

From Software Engineer to AI Engineer: Lessons from Janvi’s Journey Through 46 AI Startups to OpenAI

The explosion of AI startups and advancements in large language models (LLMs) has created a dynamic and sometimes overwhelming landscape for engineers aspiring to work in AI. Janvi, a software engineer turned AI engineer, shares her unique journey of interviewing at 46 AI companies, learning the nuances of the AI job market, and ultimately landing a role at OpenAI. Her story offers valuable insights for anyone interested in AI engineering, navigating startup culture, and evolving alongside emerging technology.


Understanding the AI Landscape: Product, Infrastructure, and Model Companies

Janvi categorizes AI companies into three segments to clarify the sprawling AI ecosystem:

  • Product Companies: Build applications on top of AI models, such as Coda, Cursor, and Hebbia. These companies focus on delivering end-user functionalities powered by AI.

  • Infrastructure Companies: Provide the tools and platforms that enable product companies to effectively use LLMs. Examples include inference providers (Modal, Fireworks), vector databases (Pinecone, ChromaDB), and observability tools (Braintrust, Galileo).

  • Model Companies: The creators of the AI models themselves, including giants like Google and Meta, as well as specialized companies like OpenAI and Anthropic.

This framework helped Janvi focus her search on model and infrastructure companies, broadening her skills beyond her previous product-focused experience.


The Early Journey: Internships at Google and Microsoft

Janvi’s internships at Google and Microsoft were pivotal. Without personal connections, she applied through portals and stood out through her essays and projects built outside class. Preparation for these roles involved deep study of classic coding interview materials, like “Cracking the Coding Interview,” long before the popularity of LeetCode and Blind 75.

Her Google internship exposed her to large-scale codebases and best engineering practices like unit testing, while Microsoft allowed her to dive deeper into operating systems, specifically on Azure OS. Importantly, she learned the value of expressing preferences during internships, which can lead to more fulfilling work experiences.


Choosing Startups Over Big Tech: A Strategic Decision

Despite having offers from Google and Microsoft, Janvi chose to join a startup, Coda, seeking breadth and rapid professional growth. Startups provided her opportunities to ship code frequently, tackle zero-to-one problems, and gain non-technical skills like product management and business understanding.

Her criteria for selecting startups evolved over time, focusing on:

  1. High and steep revenue growth
  2. Large addressable markets
  3. Loyal, obsessed customers
  4. Competitive advantages ensuring the company’s success

She emphasizes doing thorough due diligence about startup viability, including revenue, margins, and customer feedback, often gathering this information from public forums, direct customer conversations, and even investors.


Transitioning Into AI Engineering at Coda

When AI technologies like ChatGPT emerged in late 2022, Janvi proactively learned deep learning foundations, from tokens to transformers, through self-study and hackathons—even after her initial request to join Coda’s AI team was turned down.

Her persistence paid off: by demonstrating her passion and skills through independent projects and hackathons, she secured a spot on Coda’s AI team. She highlights the importance of building intuition around these technologies and learning by doing through hackathons, which also helped her understand production challenges of integrating stochastic AI models.


What Does an AI Engineer Do?

Janvi describes AI engineers as software engineers who build on top of models, involving:

  • Experimentation with models and tools
  • Prototyping solutions to real customer problems
  • Transitioning prototypes into production systems

The role combines traditional software engineering with domain-specific tasks like prompt engineering, fine-tuning models, and evaluating model performance. For example, running evaluation suites can incur real costs, unlike traditional unit tests, adding new dimensions to engineering discipline.
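The cost dimension mentioned above can be made concrete with a toy evaluation harness. This is a minimal sketch under stated assumptions: `call_model` is a hypothetical stand-in for a real provider API, and the flat per-call price is invented for illustration.

```python
# Toy LLM evaluation suite: run test cases against a model and tally
# accuracy and spend. `call_model` and the pricing are assumptions;
# a real eval would hit a provider API and use its actual token rates.

def call_model(prompt: str) -> str:
    """Stand-in for a real model call; returns canned answers."""
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
    }
    return canned.get(prompt, "I don't know")

def run_eval(cases: list[tuple[str, str]], price_per_call: float = 0.002) -> dict:
    """Run each (prompt, expected) case; report accuracy and cost."""
    passed = 0
    for prompt, expected in cases:
        output = call_model(prompt)
        if expected.lower() in output.lower():
            passed += 1
    return {
        "accuracy": passed / len(cases),
        "estimated_cost_usd": price_per_call * len(cases),
    }

report = run_eval([
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
])
print(report)
```

Unlike a unit test, every case here would bill real tokens in production, which is why eval suites get budgeted and sampled rather than run on every commit.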


Favorite AI Project: Workspace Q&A at Coda

Janvi’s proudest project at Coda was building a chatbot leveraging retrieval augmented generation (RAG) to answer questions about users’ workspace documents. This prototype evolved into “Coda Brain,” a product demoed at Snowflake Dev Day and later expanded by a larger team.
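The RAG flow behind a project like this can be sketched in a few lines. This is a simplified illustration, not Coda’s actual implementation: word-overlap scoring stands in for learned embeddings, and an in-memory list stands in for a vector database.

```python
# Toy retrieval-augmented generation (RAG) pipeline: score workspace
# snippets against the question, keep the top matches, and build a
# grounded prompt for the model. Real systems use embeddings + a
# vector DB; word overlap is a deliberate simplification here.

def score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Q3 launch date is October 12.",
    "Lunch is catered on Fridays.",
    "The launch checklist lives in the Ops doc.",
]
print(build_prompt("When is the Q3 launch date?", docs))
```

The key design point is the same at any scale: the model only sees the retrieved slice of the workspace, so retrieval quality bounds answer quality.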

Her experience underscores a key lesson: don’t wait for permission to explore new technologies. Taking initiative and continuous learning can accelerate career growth, especially in emerging fields like AI.


Interviewing at 46 AI Startups: Market Observations and Strategies

Over six months, Janvi interviewed extensively across product, infrastructure, and model companies. She noticed:

  • AI startup teams are lean, fast-moving, and mission-driven.
  • Evaluating startups requires understanding unit economics, especially for infrastructure companies with expensive GPU costs.
  • Model companies have to stay ahead of open-source alternatives to justify premium pricing.
  • Due diligence is crucial; if you’re not excited about a company or lack transparent information, it’s better to wait.

Her interview preparation balanced traditional coding and system design (utilizing resources like NeetCode and Alex Xu’s books) with project-based interviews, which allowed her to showcase passion and practical skills.


Working at OpenAI: Speed, Scale, and Safety

Janvi now works at OpenAI on the safety team, focusing on:

  • Building low-latency classifiers to detect harmful model outputs
  • Measuring real-world harms and mitigating risks from model misuse
  • Integrating safety mechanisms across products

She highlights OpenAI’s unique combination of startup-like speed and massive scale (handling 60,000 requests per second), alongside an open culture that fosters learning and collaboration. Engineers are trusted to ship fast with minimal bureaucracy, emphasizing ownership and impact.
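The safety-filter idea described above can be sketched as a cheap classifier that screens output before it is returned. This is a hedged illustration, not OpenAI’s system: the keyword rules and category names are invented, and production filters are trained models tuned for low latency.

```python
# Toy output-safety gate: a fast classifier flags harmful content and
# the response is withheld if any category matches. The blocklist and
# categories below are made up for illustration only.

BLOCKLIST = {
    "self_harm": ["hurt myself"],
    "weapons": ["build a bomb"],
}

def classify(text: str) -> list[str]:
    """Return the flagged categories for a piece of text."""
    lowered = text.lower()
    return [cat for cat, phrases in BLOCKLIST.items()
            if any(p in lowered for p in phrases)]

def safe_respond(model_output: str) -> str:
    """Pass clean output through; withhold flagged output."""
    flags = classify(model_output)
    if flags:
        return f"[withheld: flagged for {', '.join(flags)}]"
    return model_output

print(safe_respond("Here is a pasta recipe."))
print(safe_respond("Step one: build a bomb..."))
```

At tens of thousands of requests per second, the classifier must be far cheaper than the model itself, which is why “low-latency” is the defining constraint of the design.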


Surprising Realities of AI Engineering

Janvi shares that AI engineering often involves building temporary solutions to current model limitations, only to scrap and rebuild as models improve (e.g., evolving from custom JSON parsing to function calling and then to the MCP paradigm). This requires adaptability and a mindset of continuous iteration.
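The custom-JSON-parsing era she mentions looked roughly like the helper below: a best-effort extractor for structured data buried in a free-form model reply. This is a generic sketch of the pattern, not any particular company’s code, and it became obsolete once native function calling guaranteed well-formed arguments.

```python
# Stopgap from the pre-function-calling era: strip markdown fences and
# pull the first JSON object out of a chatty model reply.

import json
import re

def extract_json(reply: str) -> dict:
    """Best-effort: parse the first JSON object found in an LLM reply."""
    # Drop ```json ... ``` fences if present.
    cleaned = re.sub(r"```(?:json)?", "", reply)
    # Grab the first {...} span and try to parse it.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

reply = 'Sure! Here you go:\n```json\n{"city": "Paris", "temp_c": 21}\n```'
print(extract_json(reply))
```

Code like this was built, hardened, and then deleted wholesale as the models improved — exactly the scrap-and-rebuild cycle described above.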


The Future for New Graduates and AI’s Impact on Engineering

Contrary to fears that AI will replace junior engineers, Janvi believes AI empowers all engineers to focus on higher-level creative tasks. The key skill will be knowing when to rely on AI and when to deeply understand system internals, especially for robustness and debugging.

She stresses that curiosity and understanding the “why” behind technologies remain critical traits, as engineers must still design and maintain complex systems. AI tools accelerate productivity but don’t replace the need for strong foundational knowledge.


What Remains Constant in Software Engineering?

Despite AI’s transformative impact, core software engineering fundamentals endure:

  • Designing high-level architectures
  • Debugging complex systems
  • Writing maintainable, well-structured code

Janvi finds value in revisiting classic software engineering books like The Mythical Man-Month and Software Architecture by Mary Shaw and David Garlan to uncover timeless principles that still apply.


Practical Tips and Final Thoughts

  • Be proactive: Don’t wait for permission to explore AI—start building side projects or join hackathons.
  • Do your homework: Research startups thoroughly before joining; talk to customers, investors, and read industry analysis.
  • Balance learning and building: Use AI to accelerate coding but also deepen your understanding for ownership.
  • Embrace change: AI engineering requires agility as technologies and best practices evolve rapidly.
  • Cultivate curiosity: Ask “why” and seek to understand underlying mechanisms, which leads to better engineering.

Janvi’s journey—from learning transformer basics on her own to building impactful AI products and joining OpenAI—illustrates that dedication, curiosity, and strategic decision-making can open doors in the fast-paced AI industry.


Resources Mentioned

  • Cracking the Coding Interview by Gayle Laakmann McDowell
  • NeetCode and Blind 75 coding practice
  • Alex Xu’s System Design Interview books
  • Hackathons like Buildspace and internal company events
  • Blogs, Twitter, and open source documentation (e.g., LangChain)
  • Software architecture classics: The Mythical Man-Month and Software Architecture by Mary Shaw and David Garlan

Conclusion

Janvi’s story is a powerful testament to self-driven learning, thoughtful career choices, and embracing emerging technologies. For engineers aiming to transition into AI or grow within the AI ecosystem, her experience offers a practical blueprint: understand the landscape, continuously build and learn, and don’t hesitate to take initiative. The AI revolution is still unfolding, and there’s room for passionate engineers to shape the future.


For more deep dives on AI engineering and insights from OpenAI teams, check out the Pragmatic Engineer podcast and related resources.

Understanding Kubernetes: Inside the World’s Second Largest Open Source Project

Kubernetes has become synonymous with modern cloud infrastructure, powering vast swarms of containerized applications across the globe. But what exactly is Kubernetes, why has it become so dominant, and how is such a massive open source project managed? In a recent deep dive conversation with Kat Cosgrove, leader of the Kubernetes release team subproject, we uncover the architecture, history, community, and operational secrets behind Kubernetes’ success.


What is Kubernetes and Who Should Care?

At its core, Kubernetes is a powerful orchestration tool designed to manage and scale applications running as swarms of containers. It automates resource scaling—whether that’s networking, storage, or compute—based on demand, helping maintain high availability while controlling costs.

However, Kubernetes is complex and primarily targeted at specialists such as cluster administrators or site reliability engineers (SREs). Most software developers may never interact with Kubernetes directly, though many applications they use likely run on it.

The tool’s rise was driven by the increasing complexity of microservices architectures. Before Kubernetes, managing clusters of containers was manual and error-prone. Kubernetes abstracts away much of this complexity, enabling rapid and reliable scaling of distributed applications.


The Kubernetes Architecture: Pods, Nodes, and Control Plane

Kubernetes operates with a control plane and nodes:

  • Control Plane: Manages the overall cluster state and scheduling, usually abstracted away from day-to-day user interaction.

  • Nodes: These host pods, which are groups of containers managed together. Each node runs a kubelet, a small agent that communicates with the Kubernetes API to start or stop containers.

If a pod fails—due to application crash or resource exhaustion—Kubernetes automatically recreates it without user intervention, ensuring seamless availability.
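That self-healing behavior is the classic reconciliation loop: compare desired state with actual state and act on the difference. The sketch below is a simplified model of the idea, not the real kubelet or controller-manager logic.

```python
# Toy reconciliation loop: if running pods fall below the desired
# replica count, "schedule" replacements until the states match.
# A vastly simplified model of how Kubernetes controllers self-heal.

def reconcile(desired: int, running: set[str]) -> set[str]:
    """Bring the set of running pods up to the desired replica count."""
    running = set(running)  # don't mutate the caller's set
    i = 0
    while len(running) < desired:
        name = f"pod-{i}"
        if name not in running:
            running.add(name)  # create a replacement pod
        i += 1
    return running

pods = {"pod-0", "pod-1", "pod-2"}
pods.discard("pod-1")       # pod-1 crashes
pods = reconcile(3, pods)   # the control loop notices and heals
print(sorted(pods))
```

Real controllers run this loop continuously against the API server, which is why operators declare *what* they want rather than scripting *how* to recover.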


Containers vs Virtual Machines

Containers and virtual machines (VMs) are both virtualization technologies but operate differently:

  • Containers virtualize the operating system, are lightweight, and highly portable. They enable microservices architectures by allowing developers to package applications with their dependencies.

  • Virtual Machines virtualize hardware and are much larger and more resource-heavy.

Kubernetes relies on container virtualization but can run inside VMs if needed. The popularity of Docker and container registries empowered Kubernetes’ ecosystem, making container orchestration essential for scaling complex applications.


The Origins of Kubernetes: From Google’s Borg to Open Source Giant

Kubernetes originated from Google’s internal cluster management tool called Borg, used to manage tens of thousands of servers and microservices. Recognizing the broader industry need for such a tool, Google open sourced Kubernetes almost 11 years ago by donating it to the Cloud Native Computing Foundation (CNCF).

This move was both altruistic and strategic: though Google still runs Borg internally, open sourcing Kubernetes gave them significant influence over the cloud-native ecosystem. Today, Kubernetes is the second-largest open source project in the world, behind only Linux.


The Power of Open Source and Community

Open sourcing Kubernetes led to a vibrant community of users, contributors, and maintainers:

  • Users consume Kubernetes to deploy applications.
  • Contributors submit code, documentation, and fixes.
  • Maintainers oversee governance, technical decisions, and project health.

With over a thousand contributors monthly and around 150-200 maintainers, Kubernetes thrives on community collaboration. Many contributors are paid by companies that rely on Kubernetes, while others contribute as a hobby or career growth path. The project’s transparency, governance rules preventing single-company dominance, and extensive documentation foster a healthy ecosystem.


Managing Kubernetes Releases: A Well-Oiled Machine

Kubernetes releases happen every 12 to 16 weeks and require coordination among dozens of people. The release team is divided into subteams handling communications, documentation, enhancement proposals, and quality signals.

For example, the communications team manages feature blogs and webinars, while the documentation team ensures every user-facing change is properly documented before release—a strict policy that has helped Kubernetes maintain exceptional documentation quality.

This structured approach, combined with strict anti-burnout policies and mentorship, ensures sustainable project management despite the project’s scale and complexity.


Why Kubernetes Won: Documentation and Governance

Several factors contributed to Kubernetes becoming the de facto standard:

  • Google’s reputation and early hype built initial trust.
  • Integration with Docker’s ecosystem made it accessible.
  • Exceptional documentation and transparency: Every user-facing feature must be documented, enforced through Kubernetes Enhancement Proposals (KEPs), which are public and thoroughly reviewed.
  • Open governance: No single company can dominate the project, fostering community trust.

Good documentation has been a key enabler for adoption, helping users and contributors alike navigate the complexity.


Getting Started with Kubernetes and Contributing

For those interested in learning Kubernetes, the official Kubernetes documentation and Google Kubernetes Engine (GKE) tutorials are excellent starting points, especially since GKE offers sandbox environments to experiment safely.

If you want to contribute, documentation is a great entry point, especially for newcomers. Kubernetes welcomes new contributors and even allows anyone to apply to join the release team, offering an invaluable experience for those early in their careers.

Kat herself emphasizes how contributing can be a rewarding networking opportunity, connecting you with experts and opening career doors.


The Role of AI in Kubernetes Development

Interestingly, Kubernetes maintainers are skeptical about the current hype around generative AI tools, considering most to be overhyped or ineffective for their workflow. For Kubernetes, which involves a lot of people and project management, AI tools have limited use. However, automating mundane tasks like labeling pull requests could be a promising application.


When to Use Kubernetes: Advice for Startups and Teams

Kubernetes is not always the right choice, especially for small projects or simple websites. It shines when rapid scaling and cost control are critical. Key advice includes:

  • Avoid rolling your own cluster; use managed Kubernetes services like GKE, AKS, or EKS.
  • Migrate to Kubernetes before you hit a scaling bottleneck to avoid painful migrations.
  • Consider whether you have or can hire the expertise needed to manage Kubernetes securely and efficiently.

Final Thoughts

Kubernetes represents a monumental achievement in infrastructure management, combining Google’s engineering prowess with a vast, open source community. Its careful blend of automation, documentation, governance, and collaboration serves as a model for managing large, complex projects.

Whether you are a developer curious about what runs your applications or an engineer considering contributing to Kubernetes, there is a wealth of resources and a welcoming community ready to support you.



Special thanks to Kat Cosgrove for sharing her insights about the Kubernetes project, its community, and its future.


If you enjoyed this deep dive, consider subscribing to related podcasts and exploring other topics around open source infrastructure and software engineering.