Genesis
The twenty-one discourse traps were not designed. They were discovered across three phases, each phase answering a different evidentiary question. The sequence is itself the methodological contribution.
The first eight traps emerged inductively from close reading of practitioner discourse (LinkedIn posts, conference talks, design community threads) during the evidence-gathering phase for the Proxy Seduction Framework mechanism paper. Practitioners kept making structurally similar moves: acknowledging the evaluative challenge AI creates, then immediately neutralizing their own insight through recognizable patterns.
Once the mechanism was articulated, it became possible to ask what other discursive moves the framework would predict. Six further traps were deduced from theory. All six were independently confirmed in evidence collected after the prediction. Three additional traps then surfaced through systematic re-analysis of accumulated evidence through the framework lens, and four more were predicted from cross-domain application to AI diagnostics in India, a domain the software evidence base would not have surfaced on its own.
Eight traps from practitioner discourse
Traps 1 to 8. Close reading of professional discussions across design, engineering, and executive commentary. These surfaced before the framework was fully articulated. Operational categories: self-framing and temporal reasoning.
Six traps predicted and confirmed
Traps 9 to 14. Derived from the proxy seduction mechanism, then tested against new evidence. All six confirmed in discourse collected after the prediction was made. The prediction-confirmation structure grounds the framework's falsifiability claim.
Seven traps from re-analysis and domain extension
Traps 15 to 17 emerged from systematic re-analysis of accumulated evidence (legitimation circuits, performative constitution, domain expertise dismissal). Traps 18 to 21 were predicted from cross-domain application to AI diagnostics in India, each with an explicit portability prediction.
Draft Narrative
Opening: the recognition paradox. Practitioners see the problem clearly and treat the seeing as sufficient. A motivating example drawn from the evidence constellation, ideally one where the speaker names proxy-criterion decoupling explicitly and then reframes the observation into a managerial remedy that presupposes stable evaluators.
Literature gap. The AI adoption and implementation literature assumes practitioners either resist or embrace. Missing from that dichotomy: the patterned ways that practitioner discourse naturalizes transformative effects through sincere belief rather than strategic evasion. The Barnesian performativity literature names field-level constitution but not the cognitive micro-foundation. The professional sensemaking literature names interpretive repertoires but treats the evaluator as continuous across technology shifts.
Method: three-phase trap identification. Phase one: inductive coding of practitioner discourse. Phase two: deductive prediction from the proxy seduction mechanism, tested against subsequently collected evidence. Phase three: analytical emergence through systematic re-examination, extended by cross-domain portability testing. Transparency about the epistemological progression is itself a methodological contribution.
Findings 1: the twenty-one traps. Organized by operational category (self-framing, temporal reasoning, structural reasoning, field-level dynamics) with cross-cutting mode tags (strategic, constitutive, hybrid). Representative exhibits drawn from the evidence base. The categorical organization carries theoretical weight: traps at different levels operate through different mechanisms.
Findings 2: co-occurrence as compositional grammar. The cluster, not the individual trap, is the primary unit of analysis. Practitioners rarely deploy one trap in isolation. Recurring clusters suggest underlying discursive logics that organize the traps into coherent rhetorical moves. The co-occurrence matrix documents these patterns quantitatively across the cluster log.
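The co-occurrence matrix could be derived from the cluster log with a few lines of standard Python. A minimal sketch, assuming a cluster log represented as sets of trap IDs per practitioner excerpt; the `cluster_log` data and the Jaccard association measure below are illustrative stand-ins, not the project's actual log or its chosen statistic:

```python
from itertools import combinations
from collections import Counter

# Hypothetical cluster log: each entry is the set of trap IDs
# co-occurring in one practitioner excerpt (illustrative data only).
cluster_log = [
    {1, 3, 9},
    {1, 3},
    {9, 14},
    {1, 9, 3},
    {14, 17},
]

pair_counts = Counter()   # how often each trap pair co-occurs
trap_counts = Counter()   # how often each trap appears at all
for cluster in cluster_log:
    trap_counts.update(cluster)
    pair_counts.update(combinations(sorted(cluster), 2))

def jaccard(a: int, b: int) -> float:
    """Association between traps a and b: |A and B| / |A or B|."""
    co = pair_counts[(min(a, b), max(a, b))]
    return co / (trap_counts[a] + trap_counts[b] - co)

print(jaccard(1, 3))   # traps 1 and 3 always appear together in this toy log
print(jaccard(9, 14))  # weaker association
```

Whether such association scores are reported quantitatively or used only to surface candidate clusters for qualitative exhibits is venue-dependent, as the open questions below note.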
Findings 3: recognition without reflexivity. When does recognition convert to action? Raad (anoma.ly) provides the counter-case: implementation cost as quality filter, effort substitution, craftsperson adverse selection. The conditions under which recognition produces structural adjustment (rather than further proxy elaboration) become the falsifiability frontier for the framework.
Findings 4: cross-domain portability. Fourteen of the seventeen original traps are classified as mechanism-level (expected to appear in any domain where evaluative capacity is at stake), one as domain-bound (specific to software engineering practice), two as uncertain. The four traps predicted from AI diagnostics in India are candidates for mechanism-level confirmation. The portability predictions are themselves testable: if a mechanism-level trap fails to appear in diagnostics, or a domain-bound trap does appear, the classification is wrong and the framework learns something.
Discussion. Implications for organizational evaluation under AI engagement. The diagnostic as fieldwork infrastructure. The mechanism-level versus domain-bound distinction as a contribution to qualitative generalizability methodology. Boundary conditions: the framework is silent on non-evaluative work, on contexts where proxy and criterion are tightly coupled by design, and on reversibility timelines.
Contributions
1. Three-phase epistemological progression as method
Inductive discovery, deductive prediction with confirmation, analytical emergence with cross-domain extension. The sequence provides a reproducible template for qualitative theory-building that treats prediction as a test of framework claims rather than a rhetorical flourish.
2. Co-occurrence clustering as compositional grammar
Shifts the unit of analysis from the individual discourse move to the cluster. Recurring clusters reveal underlying rhetorical logics that would be invisible at the individual-trap level.
3. Falsifiability architecture with testable predictions
Each trap carries observable indicators, predicted co-occurrences, and Raad-like conditions under which its absence would disconfirm the framework. Makes the diagnostic a testable instrument, not just a coding scheme.
4. Cross-domain portability classification
Distinguishes mechanism-level traps (portable across evaluative domains) from domain-bound traps (specific to a field's institutional arrangements). Contributes to the qualitative generalizability literature by operationalizing portability as a testable property rather than a rhetorical claim.
Possible Venues
Organization Studies
Discourse analysis methodology. Connection to institutional logics: trap clusters as the discursive maintenance of proxy-oriented norms.
Research Policy
STS angle. How professional communities construct evaluative frameworks under transformative technology engagement.
Academy of Management Discoveries
Empirical paper surfacing unexpected phenomena. The six confirmed deductive traps fit their prediction-confirmation model well.
Information and Organization
Discourse, technology, and organizational evaluation at their intersection. Fit with the journal's recent sociomaterial turn.
Sequencing Contingencies
The proxy seduction mechanism paper was desk-rejected at Academy of Management Review in March 2026 and pivoted to two parallel submissions: an Organization Studies Perspectives version (mechanism contribution, phenomenon-based theorizing) and an International Journal of Management Reviews version (review-based problematization challenging the evaluative continuity assumption across five literature streams). The AI Alibi paper is co-authored and targeted at MIT Sloan.
This discourse traps paper sequences after the mechanism paper lands somewhere, giving it a citable theoretical foundation rather than requiring it to establish the mechanism from scratch in a shorter format. If either OS Perspectives or IJMR lands, this paper can reference the mechanism and focus on the three-phase method, the co-occurrence analysis, and the cross-domain portability contribution.
If both mechanism submissions fail to land, the paper can still proceed by presenting the diagnostic as a grounded contribution informed by the framework, with the mechanism sketched briefly rather than fully developed. The falsifiability and cross-domain contributions stand on their own.
Underlying Research Artifact
The diagnostic that grounds this paper is maintained as a living fieldwork tool at discourse-traps.html. As evidence accumulates and the cluster log grows, the paper concept stays grounded in the working instrument. The paper reports on the state of the diagnostic at the time of submission, not on a frozen taxonomy produced for publication.
Open Questions
- Pre-interview versus post-interview evidence base: does the paper draw only on discourse already collected, or does it incorporate the empirical phase interviews once those are complete? Affects timing and ethics approvals.
- Co-occurrence quantification: does the paper present the matrix as qualitative evidence (cluster patterns illustrated with examples) or as quantitative output (association measures across the cluster log)? Venue-dependent.
- How much mechanism to restate: resolves itself once the mechanism paper's venue is known. If OS Perspectives lands, the paper cites it and keeps mechanism exposition to one paragraph. If both fail, the paper carries more mechanism weight.
- Strategic versus constitutive tagging as analytical contribution: the mode tags (strategic, constitutive, hybrid) are currently metadata on the diagnostic. Whether they become a distinct analytical contribution in this paper, or are folded into the findings, is an open editorial call.