AI skeptic and EM of an AI team: balancing two worlds and leading the team to success
My opinions are my own and do not represent GitHub or anyone associated with GitHub.
In late 2023, I had the opportunity to move into an engineering manager (EM) role and lead a team, something I'd been eager to do. As a Staff Engineer, I'd spent years noticing how few people who look like me were in those roles. I wanted, and still want, to lift as I climb. I didn't choose the team or the problem space; I signed up to grow, lead, and learn.
It turned out my team would be building Copilot experiences across GitHub.com. I won't get into feature details here; I prefer to keep the team and their work out of the spotlight.
Why call myself an AI skeptic? A few reasons:
- Consent and licensing: The lack of meaningful consent in many training datasets bothers me. Questions about copyrighted works and code under various licenses don't have clean answers. As of today, most open-source licenses don't explicitly address training large language models, and reasonable people disagree about whether it aligns with the spirit of open source.
- Energy and cost: The compute and energy footprint of training and operating models is nontrivial, and we should take it seriously.
- One-size-fits-all thinking: Not every problem needs an LLM, yet the hype can make it feel like it does. When one technology crowds out others, we risk a kind of monochrome sameness.
- Market fatigue: The buzz is real, and so is the fatigue. Even users are telling us they don't want "AI" in everything.
So how do I lead an AI team with that mindset?
I start with honesty. In 1:1s, I share where I'm coming from. For some teammates, that opened the door to voice their own frustrations. For others, it became an invitation to change my mind. I can't, and won't, pretend to be someone I'm not. The same transparency applies with product partners: I explain my perspective; they explain the vision and why it matters.
I also believe healthy tension between product and engineering is a feature, not a bug. Having someone in the room who needs convincing sharpens thinking. Inside our team, I get pitched constantly, and that's great. It forces us to articulate user value, guardrails, and trade-offs.
Finally, skepticism fuels curiosity. When I'm unsure about something, I research it deeply. In a space where something new lands every week, doing the homework keeps me on my toes and helps the team make better calls.
Being an AI skeptic and leading an AI team aren't mutually exclusive. The skepticism keeps us honest; the leadership keeps us moving.