Genesis
The discourse traps were not designed. They were discovered. The first eight emerged inductively from close reading of practitioner discourse (LinkedIn posts, conference talks, design community threads) during the evidence-gathering phase for the PSF mechanism paper. Practitioners kept making structurally similar moves: acknowledging the evaluative challenge AI creates, then immediately neutralizing their own insight through recognizable patterns.
Once the PSF mechanism was articulated, it became possible to ask: what other discursive moves would the mechanism predict? Six additional traps were deduced from the theory. All six were subsequently confirmed in independently collected evidence.
Three more emerged analytically from systematic re-examination of accumulated evidence, operating not within individual discourse but between actors (legitimation circuits, performative constitution, domain expertise dismissal).
Four more were predicted by applying the PSF mechanism to a second domain (AI diagnostics in India), where the mechanism interacts with domain-specific features (regulatory gatekeeping, life-critical stakes, workforce shortage) to generate discourse moves absent from the software evidence base.
The sequence matters: inductive discovery, deductive prediction with confirmation, analytical emergence, cross-domain prediction. That four-phase epistemological progression is itself a methodological contribution. Frameworks rarely document all four phases in a single study.
What the Paper Would Argue
Existing research on AI in organizations focuses on implementation barriers, productivity measurement, and workforce impact. What remains underexamined is how practitioner discourse itself functions as a mechanism that naturalizes the erosion of evaluative capacity. Practitioners do not resist AI or ignore its challenges. They recognize the challenges with precision, then deploy discursive moves that absorb the insight before it can produce organizational change.
The paper would introduce and demonstrate a diagnostic framework for identifying these moves. It would show that the moves cluster in predictable ways that produce emergent narratives, specify the conditions under which discursive recognition does and does not convert to organizational action, and distinguish traps that are consequences of the PSF mechanism itself (appearing wherever AI engagement transforms evaluative capacity) from those that are consequences of the specific domain in which they were first observed.
Four Publishable Contributions
Four epistemological categories as methodological contribution
Eight inductively observed traps, six deductively predicted traps (all confirmed in independent evidence), three analytically emergent traps, four cross-domain predicted traps (derived from applying PSF to AI diagnostics). Documenting this sequence in a single study demonstrates a framework's generative power in a way that is unusual and methodologically rigorous. The deductive traps constitute a strong test (the framework predicted discursive patterns before they were observed), and the cross-domain traps extend that test to a structurally different context.
Co-occurrence clustering as compositional grammar
Specific trap clusters produce emergent discursive effects that no individual trap generates alone. A practitioner deploying five traps in a single post tells you something different from five practitioners each deploying one. The paper would propose a compositional grammar of proxy seduction discourse: individual traps as vocabulary, clusters as syntax. This is a contribution to discourse-analysis method, not only to the AI-and-organizations literature.
Falsifiability architecture with testable predictions
Three conditions under which recognition converts to reflexive action: short feedback loops, skin in the game, independence from consensus. Raad as counter-case. The paper would generate testable predictions that empirical fieldwork can confirm or disconfirm. If practitioners in those conditions still fail to act, or practitioners outside those conditions do act, the framework is challenged. That turns a descriptive diagnostic into a testable theory.
Cross-domain portability analysis: mechanism-level versus domain-bound traps
Each of the 21 traps carries an explicit portability prediction: mechanism-level (should appear wherever PSF operates), domain-bound (specific to the domain where it was first observed), or uncertain (requires empirical testing to classify). Fourteen of the original 17 software-derived traps are predicted mechanism-level. One (Individual Agency Frame) is predicted domain-bound because diagnostic AI deployment is institutional, not individual. Four new traps were predicted from applying PSF to AI diagnostics in India (Moral Urgency, Regulatory Legitimacy, Shortage-as-Authorization, Accuracy-as-Outcome), three of which may be specific to regulated, life-critical domains. The portability predictions are themselves testable: if a mechanism-level trap fails to appear in diagnostics, or a domain-bound trap does appear, the classification is wrong and the theory learns something. This distinction between mechanism-level and domain-level discourse moves is a contribution to how organizational scholars think about the generalizability of qualitative findings across empirical contexts.
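The falsification logic above can be made concrete as data. The sketch below is illustrative only: the trap names come from the framework, but the encoding, the `PortabilityPrediction` structure, and the `check` function are hypothetical scaffolding for how a portability classification could be tested against observations from a second domain, not part of the study itself.

```python
from dataclasses import dataclass

# Illustrative encoding of a portability prediction as a falsifiable claim.
# Trap names are from the framework; everything else here is an assumption.

@dataclass
class PortabilityPrediction:
    trap: str
    prediction: str  # "mechanism-level", "domain-bound", or "uncertain"

def check(pred: PortabilityPrediction, observed_in_new_domain: bool) -> str:
    """Compare a prediction against an observation from a second domain."""
    if pred.prediction == "mechanism-level":
        # Mechanism-level traps should appear wherever PSF operates.
        return "consistent" if observed_in_new_domain else "classification challenged"
    if pred.prediction == "domain-bound":
        # Domain-bound traps should not travel beyond their original domain.
        return "classification challenged" if observed_in_new_domain else "consistent"
    # Uncertain traps are classified from the new evidence itself.
    return "reclassify from evidence"

# A domain-bound trap appearing in the new domain challenges the classification.
iaf = PortabilityPrediction("Individual Agency Frame", "domain-bound")
print(check(iaf, observed_in_new_domain=True))
```

Either outcome is informative: a failed prediction does not break the framework, it reclassifies the trap, which is exactly the "theory learns something" property the paragraph claims.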
Potential Paper Narrative
Opening: The recognition paradox. Practitioners see the problem clearly and treat the seeing as sufficient. Motivating example from field evidence.
Literature gap: AI adoption/implementation literature assumes practitioners either resist or embrace. Missing: how discourse naturalizes transformative effects through sincere belief, not strategic evasion.
Method: Four-phase trap identification. Inductive coding of practitioner discourse → deductive prediction from PSF mechanism → analytical emergence from systematic re-examination → cross-domain prediction from applying PSF to AI diagnostics. Transparency about epistemological progression.
Findings 1: The 21 traps, organized by level of reasoning (individual self-framing, temporal reasoning, structural reasoning, field-level dynamics). Representative examples from evidence base.
Findings 2: Co-occurrence patterns. The cluster is the primary unit of analysis, not the individual trap. What co-occurrence reveals about deeper discursive structures.
Findings 3: Recognition without reflexivity. When and why recognition converts to action (Raad conditions). Falsifiability architecture.
Findings 4: Cross-domain portability. Which traps are mechanism-level (14 of 17 predicted), which are domain-bound (1), which are uncertain (2). Four candidate traps from a second domain (AI diagnostics in India) that the software evidence base would not have surfaced. The portability predictions as testable claims about PSF's generalizability.
Discussion: Implications for organizational evaluation under AI engagement. The diagnostic as fieldwork infrastructure. The mechanism-level versus domain-bound distinction as a contribution to qualitative generalizability methodology. Limitations and boundary conditions.
Possible Venues
Organization Studies
Discourse analysis methodology. Institutional logics connection (traps as discursive maintenance of proxy-oriented norms).
Research Policy
STS angle. How professional communities construct evaluative frameworks for transformative technology.
AMD
Empirical paper surfacing unexpected phenomena. The confirmation of all six deductive traps fits AMD's model of discovery-driven research.
Information and Organization
Discourse, technology, and organizational evaluation intersection. Closest fit for the full argument.
Sequencing and Contingencies
If AMR lands: Paper 2 in PhD pathway. AMR establishes PSF as theoretical framework. Traps paper demonstrates what the framework can see that other approaches cannot, including cross-domain portability predictions that make PSF falsifiable beyond the software context. Empirical interview paper (Paper 3) uses both the framework and the diagnostic as analytical infrastructure. Diagnostics extension (Paper 4, post-PhD) tests the portability predictions empirically.
If AMR does not land: PSF mechanism paper may need to go to IJMR or an alternative journal. The traps paper can still proceed, either as a standalone contribution (presenting the diagnostic as a grounded theory contribution informed by but not dependent on a published PSF) or sequenced after wherever the mechanism paper lands. The four contributions hold regardless of where PSF is published.
Timing: Develop in detail after the AMR submission is settled. The discourse diagnostic tool (now editorially clean, register-calibrated, with source links) is the research artifact the paper would describe and demonstrate. It continues to accumulate evidence in the meantime.
Open Questions
Can the traps paper stand entirely on pre-interview evidence (LinkedIn posts, conference talks, published practitioner discourse), or does it need interview data to satisfy reviewers? If the former, it could be written sooner. If the latter, it sequences after Paper 3's data collection.
Should the co-occurrence analysis be quantitative (frequency counts, network analysis of trap pairings) or remain qualitative (interpretive analysis of what clusters reveal)? Venue choice may determine this.
How much of the PSF mechanism needs to be restated in the traps paper versus cited from the published mechanism paper? This depends on which lands first and where.
Does the cross-domain portability analysis (Contribution 4) strengthen the paper enough to justify the additional complexity, or does it dilute the focus? One option: present the portability predictions as a concluding section that opens the door to future work, rather than as a full-weight finding requiring its own evidence presentation. The diagnostics domain scan provides the theoretical grounding for the predictions without requiring completed diagnostics fieldwork.