Strangelove-AI February 2, 2025

AI adoption and organizational resistance

Enterprise adoption of AI tools is outpacing the development of the policies that govern their use. This creates friction: leadership pushes for integration while employee responses range from eager experimentation to active resistance. Organizational behavior research suggests that resistance patterns contain useful information about security vulnerabilities and adoption risks.

Skeptics as quality control

Employees who resist AI adoption — sometimes labeled “Cautious Chris” in organizational persona frameworks — are often treated as obstacles. This framing misses their function.

Skeptics tend toward high attention to detail and a preference for established workflows. These traits slow integration, but they also catch compliance gaps and policy violations that enthusiasts overlook. The tradeoff: complete tool avoidance creates skill gaps as competitors adopt new capabilities.

Effective onboarding for skeptics uses role-specific training tied to their existing job functions. Low-risk use cases let them apply their scrutiny to the adoption process itself. When skeptics eventually adopt a tool, they have already stress-tested it against existing workflows.

Persona assignment in prompt engineering

LLMs respond differently depending on the framing of the request. Assigning a specific role to the model reduces output ambiguity. A prompt like “You are a copyeditor and your job is to correct spelling and grammar mistakes without changing the meaning of the text” constrains the model’s interpretation of the task.
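
To make the technique concrete, here is a minimal sketch of role assignment using the OpenAI Python client; the model name is a placeholder, and the system message is the copyeditor prompt from above.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Role assignment: the system message constrains how the model
    # interprets the user's text before it ever sees that text.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a copyeditor and your job is to correct spelling "
                    "and grammar mistakes without changing the meaning of the text."
                ),
            },
            {"role": "user", "content": "Their going to review the draft tomorow."},
        ],
    )
    print(response.choices[0].message.content)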

This technique requires iteration. Users refine prompts by adding context, specifying output format, and providing examples of desired results. The SANS Institute documentation on prompt engineering describes this as guiding the AI’s output through explicit role assignment.
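
Most of that iteration happens in the prompt itself rather than in code. A sketch of one refinement pass on the same copyeditor role, adding an output-format constraint and a worked example (both illustrative):

    messages = [
        {
            "role": "system",
            "content": (
                "You are a copyeditor. Correct spelling and grammar mistakes "
                "without changing the meaning of the text. "
                "Return only the corrected text, with no commentary."
            ),
        },
        # One example of the desired input/output behavior (few-shot prompting).
        {"role": "user", "content": "The report were submited on friday."},
        {"role": "assistant", "content": "The report was submitted on Friday."},
        # The actual request.
        {"role": "user", "content": "Managment will recieve the minutes shortly."},
    ]

The example pair does double duty: it demonstrates the correction behavior and shows the model that the expected output is the corrected text alone.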

The same principle applies to understanding employee behavior. Different user archetypes interact with AI tools in predictable ways, and those patterns have security implications.

Enthusiast adoption and shadow AI

Early adopters — the “Trailblazing Tom” archetype — find novel applications and automate repetitive tasks. They also create shadow AI problems: tools deployed without IT approval, data shared with external services without review, workflows that bypass established controls.

Enthusiast behavior generates friction with colleagues who lack technical confidence. It also opens security gaps. A multilayered response involves governance policies, tool identification, and access controls. One practical approach: pair enthusiasts with skeptics during tool evaluation. The skeptic’s tendency to question assumptions checks the enthusiast’s tendency to prioritize speed over compliance.
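
Tool identification can start with something as simple as scanning proxy or egress logs for traffic to known AI services. A rough sketch; the domain watchlist and the log format are hypothetical and would need to match your environment:

    import re
    from collections import Counter

    # Hypothetical watchlist; a real one would be maintained and much longer.
    AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

    def find_shadow_ai(log_lines):
        """Count requests per user to watchlisted AI services."""
        hits = Counter()
        # Assumed log format: "<timestamp> <user> <host> <path>"
        entry = re.compile(r"^\S+\s+(\S+)\s+(\S+)")
        for line in log_lines:
            m = entry.match(line)
            if m and m.group(2) in AI_DOMAINS:
                hits[m.group(1)] += 1
        return hits

    logs = [
        "2025-02-02T09:14:03 alice api.openai.com /v1/chat/completions",
        "2025-02-02T09:15:41 bob intranet.example.com /wiki",
    ]
    print(find_shadow_ai(logs))  # Counter({'alice': 1})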

Pattern recognition without ethical reasoning

AI systems process large datasets and detect statistical anomalies. In security applications, this means identifying potential threats, generating threat intelligence, and automating routine incident response. These capabilities operate through pattern matching.

AI lacks contextual understanding and cannot evaluate ethical implications. Automated systems flag anomalies; humans must interpret what those anomalies mean and decide how to respond. Security teams that defer entirely to AI-generated alerts lose the judgment layer that distinguishes false positives from genuine threats.
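
A toy illustration of that division of labor: a statistical check flags outliers, and everything flagged goes to an analyst queue rather than triggering an automated response. The login-count metric and the threshold are invented for the example.

    import statistics

    def flag_anomalies(daily_login_counts, threshold=2.0):
        """Flag days whose login count deviates from the mean by more than
        `threshold` standard deviations. This is pattern matching only: the
        function cannot tell an attack from a product launch or a logging bug."""
        mean = statistics.mean(daily_login_counts)
        stdev = statistics.stdev(daily_login_counts)
        return [
            (day, count)
            for day, count in enumerate(daily_login_counts)
            if stdev and abs(count - mean) / stdev > threshold
        ]

    counts = [120, 118, 125, 122, 119, 640, 121]
    for day, count in flag_anomalies(counts):
        # Route to a human analyst; never auto-block on a statistic alone.
        print(f"day {day}: {count} logins -- queued for analyst review")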

Data leakage in LLM workflows

LLMs become more useful with more context. Users naturally want to provide background information, previous documents, and specific details to improve output quality. This creates a data exposure problem.

Many public LLM services use submitted prompts as training data. When an enthusiast uploads meeting notes, draft reports, or customer information to improve a summary, that data may become part of the model’s training set. Once ingested, the data cannot be retrieved or deleted.

This risk is highest among power users who have learned that detailed prompts produce better results. Technical controls like data loss prevention filters help, but the core issue is user behavior. Policies must specify what data categories can enter external AI systems, and users must understand that prompt submissions are not private.
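
One example of such a control is a client-side check that runs before a prompt leaves the organization. This is a minimal sketch; the patterns are illustrative, and a real deployment would rely on a proper DLP engine rather than a handful of regexes.

    import re

    # Illustrative patterns only; real DLP rule sets are far broader.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical naming convention
    }

    def check_prompt(prompt):
        """Return the sensitive-data categories detected in a prompt."""
        return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

    prompt = "Summarize the PROJ-1234 notes and send them to ana@example.com."
    violations = check_prompt(prompt)
    if violations:
        print("Blocked before submission:", ", ".join(violations))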

Measured adoption

The target state is selective tool adoption based on demonstrated value. This means evaluating each use case for benefits and risks before deployment, rather than maximizing the number of AI integrations.

Effective adoption draws on multiple perspectives: enthusiasts identify opportunities, skeptics identify risks, and pragmatists weigh tradeoffs. Organizations that optimize for adoption speed alone tend to accumulate security debt and compliance gaps that surface later.