My opinions are my own and do not represent GitHub or anyone associated with GitHub.

In late 2023, I had the opportunity to move into an engineering manager (EM) role and lead a team, something I’d been eager to do. As a Staff Engineer, I’d spent years noticing how few people who look like me were in those roles. I wanted, and still want, to lift as I climb. I didn’t choose the team or the problem space; I signed up to grow, lead, and learn.

It turned out my team would be building Copilot experiences across GitHub.com. I won’t get into feature details here; I prefer to keep the team and their work out of the spotlight.

Why call myself an AI skeptic? A few reasons:

  • Consent and licensing: The lack of meaningful consent in many training datasets bothers me. Questions about copyrighted works and code under various licenses don’t have clean answers. As of today, most open-source licenses don’t explicitly address training large language models, and reasonable people disagree about whether such training aligns with the spirit of open source.
  • Energy and cost: The compute and energy footprint of training and operating models is nontrivial, and we should take it seriously.
  • One-size-fits-all thinking: Not every problem needs an LLM, yet the hype can make it feel like it does. When one technology crowds out others, we risk a kind of monochrome sameness.
  • Market fatigue: The buzz is real, and so is the fatigue. Even users are telling us they don’t want everything to have “AI.”

So how do I lead an AI team with that mindset?

I start with honesty. In 1:1s, I share where I’m coming from. For some teammates, that opened the door to voicing their own frustrations. For others, it became an invitation to change my mind. I can’t, and won’t, pretend to be someone I’m not. The same transparency applies with product partners: I explain my perspective; they explain the vision and why it matters.

I also believe healthy tension between product and engineering is a feature, not a bug. Having someone in the room who needs convincing sharpens thinking. Inside our team, I get pitched constantly, and that’s great. It forces us to articulate user value, guardrails, and trade-offs.

Finally, skepticism fuels curiosity. When I’m unsure about something, I research it deeply. In a space where something new lands every week, doing the homework keeps me on my toes and helps the team make better calls.

Being an AI skeptic and leading an AI team aren’t mutually exclusive. The skepticism keeps us honest; the leadership keeps us moving.