I. What boards are actually approving
Walk into most board or investment review meetings today and AI features prominently in the materials. Productivity gains. Tool adoption rates. Headcount optimization. The numbers look compelling because, at the task level, they often are: AI-assisted developers complete pull requests in a fraction of their previous time, and onboarding cycles have compressed dramatically.
But the board is approving a production process it cannot audit. Roughly a quarter of the code shipping to customers across large software organizations is now generated, at least in part, by AI systems. The humans overseeing that output — developers, leads, architects — are working with review processes designed for a world where humans wrote every line. The bottleneck has migrated downstream, and most governance structures have not followed it.
This is not a technology problem. It is a literacy problem at the governance layer.
II. The productivity paradox your portfolio companies are living inside
Faros AI's landmark research on developer telemetry identified what it calls the "AI Productivity Paradox": developers using AI write more code and complete more tasks, but organizations are not seeing corresponding improvements in delivery velocity or business outcomes. The acceleration at the individual level does not automatically compound into organizational performance.
Research from DX analyzing over 67,000 developers found something more striking: companies deploying the same AI tools produced radically different outcomes. Some saw customer-facing incidents double. Others saw them fall by half. The differentiating variable was not the model or the vendor. It was the quality of the governance structure around deployment.
"Organizations that were ready to quit their cloud or agile transformations are now giving up on AI transformation, too. Something fundamental must change."
— Laura Tacho, CTO, DX
For a VC or board member evaluating a portfolio company's AI transformation: the adoption metric you are receiving is not the outcome metric you need. The question is not whether AI tools are deployed. It is whether the governance structure around them is calibrated to the new production reality they create.
III. The security gap that briefings cannot close
Snyk's enterprise security research documents a perception gap that should concern every board with technology exposure: C-suite executives are nearly five times more likely than application security professionals to describe AI coding tools as carrying no meaningful risk. Security professionals, for their part, are three times more likely than executives to call their organization's AI governance policies insufficient.
This is not a communication failure between layers of the organization. It is a structural consequence of experiential distance. Executives who have personally used these tools encounter their specific failure modes directly — the hallucinated dependency that does not exist, the authentication logic that appears sound but is not, the context-blind error that passes all automated tests. That experience produces a different category of question in a risk review. Executives who have not had those encounters are, by default, dependent on intermediaries to characterize the risk — and the Snyk data suggests that dependency is systematically producing underestimation.
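Those failure modes are easy to describe and easier to miss in review. As a purely hypothetical sketch, consider the following Python fragment; the package name, function names, and scenario are invented for illustration, not taken from any particular tool's output.

```python
import hmac

# Failure mode 1: the hallucinated dependency. Assistants sometimes emit
# an import for a package that was never published. A stand-in example:
#     import plausible_auth_helpers  # invented name; review reads past it

def verify_token(supplied: str, expected: str) -> bool:
    # Failure mode 2: authentication logic that appears sound but is not.
    # A plain == comparison stops at the first mismatched character, so
    # response timing leaks how much of the secret an attacker has guessed.
    # Every unit test still passes, because the return values are correct.
    return supplied == expected

def verify_token_reviewed(supplied: str, expected: str) -> bool:
    # What a security-literate reviewer would insist on: a constant-time
    # comparison via the standard library's hmac.compare_digest.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

The two functions are behaviorally identical to any test suite; only a reviewer who knows to look for the timing difference will flag the first one. That is the category of judgment a governance layer either has or does not.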
IBM's 2025 Cost of a Data Breach research found that employees using AI tools outside sanctioned channels added materially to breach costs, averaging hundreds of thousands of dollars per incident. Shadow AI of that kind is a direct consequence of governance failing to keep pace with adoption.
IV. What separates the leaders generating extraordinary value
McKinsey's 2025 research on AI transformation identified a consistent pattern among the companies generating top-line growth and meaningful valuation premiums from AI. They are not the companies with the most AI tools deployed. They are the companies where senior leadership actively selects high-value workflows, assigns appropriate talent to AI-enabled priorities, and governs outcomes with measurable rigor.
IBM's CEO study found the same pattern: AI initiatives that drive real outcomes are top-down and specific, not bottom-up and diffuse. Organizations where leadership delegates the full shape of AI deployment to technical teams tend to produce impressive adoption numbers that rarely translate into enterprise value. PwC characterizes this as one of the most common and consequential mistakes companies make in AI transformation.
For boards and investors, this pattern has a direct implication: the governance question to ask of any portfolio company is not how many AI tools are in use. It is whether the leadership team has enough direct experience with those tools to govern them — to identify which workflows genuinely benefit, to understand where the risk is concentrated, and to ask the right questions when the review infrastructure fails to keep pace with AI-accelerated output.
V. The practical ask — not engineering, but engagement
The argument here is not that boards should become technical. It is more specific and more achievable: executives who have personally built even a modest application with an AI coding tool develop a category of judgment that is not available through any other pathway.
Erik Brynjolfsson's research at Stanford on human-AI collaboration is relevant here. Human-AI teams consistently outperform either humans or AI working independently — but only when the human participant has sufficient understanding of the AI system to direct it and evaluate its outputs critically. Passive consumption of AI outputs is not collaboration. It is delegation without accountability.
The tools that make this accessible are already in use across most organizations. Cursor, Claude Code, and GitHub Copilot are designed for exactly the kind of exploratory, natural-language-driven engagement a non-engineer executive can undertake meaningfully in a few hours per week. The learning is not in the syntax. It is in the direct encounter with the gap between AI's apparent competence and its actual reliability.
The question before every board is not whether its portfolio companies are using AI to build software. They already are. The question is whether the leadership overseeing that process has enough direct experience with these tools to govern it well.
The governance question worth asking any management team is not "What tools are you using?" but "Which members of your leadership have personally built something with those tools, and what did they learn when it broke?"
