Data Governance in the AI Era

Microsoft Copilot did exactly what it was designed to do: surface relevant information based on user permissions. The problem? Organizations discovered their permissions were a mess. Documents thought to be restricted were suddenly visible. Information silos that had persisted through obscurity rather than intentional design were exposed. The AI wasn't broken; it was simply revealing how broken our data governance had been all along.

For years, human analysts served as (un)intentional quality control filters. They knew which datasets were reliable, which reports contained errors, and which sources to trust. They were the bottlenecks that paradoxically protected organizations from their own data quality issues. Now, when AI systems process vast volumes of information without these implicit quality checks, they amplify every weakness in our data governance—treating questionable data with the same confidence as verified information.

The traditional response would be to impose more controls, build higher walls, create more elaborate approval processes. But as I've observed throughout my career, organizational energy flows like water—try to dam it completely, and it simply finds new paths around your barriers. The organizations that thrive in the AI era won't be those with the most restrictive governance, but those that channel data usage toward productive outcomes.

The old model of governance focused on structured data in warehouses and lakes—clean, organized, controlled. But AI's ability to interpret PDFs, images, emails, and documents means governance can no longer stop at the warehouse door. Every document in your SharePoint, every image in your drives, every email in your system is now potentially active data. The boundary has dissolved.
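To make that shift tangible, here's a minimal sketch in Python. The mount point, the file types, and the OWNERS register are all hypothetical stand-ins for a real catalog or Microsoft 365 metadata; the idea is simply that every document an AI assistant can read is an asset worth inventorying:

```python
from pathlib import Path

# Hypothetical governance register: relative path -> data owner.
# In practice this would come from your data catalog or M365 metadata, not a dict.
OWNERS = {
    "finance/q3_forecast.xlsx": "finance-data-team",
    "hr/policies.pdf": "hr-ops",
}

# File types an AI assistant can now interpret as readily as a database table.
ACTIVE_DATA_TYPES = {".pdf", ".docx", ".xlsx", ".msg", ".png"}


def ungoverned_files(root: Path) -> list[Path]:
    """Return every readable document that has no registered owner."""
    orphans = []
    for path in root.rglob("*"):
        if path.suffix.lower() in ACTIVE_DATA_TYPES:
            if str(path.relative_to(root)) not in OWNERS:
                orphans.append(path)
    return orphans


if __name__ == "__main__":
    # Hypothetical mount point for a shared drive.
    for f in ungoverned_files(Path("/mnt/shared_drive")):
        print(f"No owner on record: {f}")
```

Even a naive scan like this tends to surface the same uncomfortable truth Copilot did: far more is readable than anyone consciously decided should be.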

Yet more governance isn't the answer—different governance is. At my prior employer, we often improved data quality by collecting less data but maintaining it better. This principle becomes even more critical when AI can process everything. Quality over quantity isn't just a nice philosophy; it's a survival strategy when poorly articulated questions applied to unreliable data can produce convincing but dangerously wrong answers.

The path forward requires embedding data teams in actual work processes to identify where automated quality indicators could add value. Instead of mandating universal definitions that ignore local context, we need systems that capture and communicate the confidence level and limitations of different data sources. Even lower-quality data becomes useful when its relative confidence level is transparent to both humans and AI systems.
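What might that look like in code? Here's a minimal sketch, with hypothetical source names and confidence tiers; the point is that confidence and caveats travel with every answer, rather than living in a policy binder:

```python
from dataclasses import dataclass
from enum import Enum


class Confidence(Enum):
    """Hypothetical confidence tiers for a data source."""
    VERIFIED = "verified"    # actively maintained, reconciled to a system of record
    PLAUSIBLE = "plausible"  # generally reliable, with known gaps
    UNVETTED = "unvetted"    # use only with human review


@dataclass
class SourceRecord:
    """A catalog entry that travels with the data it describes."""
    name: str
    confidence: Confidence
    limitations: str  # plain-language caveats for humans and AI prompts alike


# A toy catalog; in a real organization this lives in your metadata platform.
CATALOG = {
    "finance_warehouse": SourceRecord(
        "finance_warehouse", Confidence.VERIFIED,
        "Reconciled monthly; reliable after close, stale mid-cycle."),
    "shared_drive_exports": SourceRecord(
        "shared_drive_exports", Confidence.UNVETTED,
        "Ad hoc spreadsheets; ownership and refresh dates unknown."),
}


def annotate(source: str, answer: str) -> str:
    """Attach the source's confidence and caveats to any answer built from it."""
    record = CATALOG[source]
    return (f"{answer}\n[source: {record.name} | confidence: "
            f"{record.confidence.value} | caveat: {record.limitations}]")


print(annotate("shared_drive_exports", "Q3 headcount appears to be 412."))
```

The same annotation can be injected into an AI system's context, so the model inherits the caveat instead of treating every source as equally trustworthy.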

Most critically, organizations must develop what I call data articulation skills. The ability to craft precise questions, understand data limitations, and critically evaluate AI outputs becomes as important as traditional data literacy. Poor questions amplified by powerful AI tools don't produce insights—they produce sophisticated-looking mistakes at unprecedented scale.

For organizations like those Delineate serves—government agencies, nonprofits, healthcare providers—the stakes are particularly high. These sectors can't afford to let AI amplify existing data problems into decisions that affect vulnerable populations. But they also can't afford to miss the opportunities that well-governed AI-enabled data analysis provides.

The organizations that succeed will be those that see AI not as a threat to control but as a spotlight illuminating where better governance can create genuine value. After all, you can't fix problems you can't see. AI just turned on the lights.

Author: Derek Weinberg

Derek is a business intelligence leader and educator who helps organizations navigate the intersection of data, technology, and human systems. He currently serves as Strategic Business Intelligence Manager at The Heritage Group and teaches data communication at IU's Kelley School of Business.

Find more AI insights at “Fox-Current | Business, Technology, and Behavior”, Derek’s personal Substack exploring the intersections of business, technology, and human behavior to find unconventional solutions within conventional systems.