This is the same story as every productivity tool that prioritizes volume over quality.
A new survey reveals a stark disconnect: executives report major productivity gains from AI, while workers describe being buried in low-quality AI-generated content that takes more work to fix than doing the job manually would have. The gap between perception and reality is massive.
Executives see output metrics going up and declare victory. Workers see their job becoming "editing garbage" instead of "doing good work." The technology works fine. The implementation is the disaster.
The Guardian reports that workers have coined a term for what they're experiencing: "workslop." It's the deluge of AI-generated content - reports, emails, presentations, code - that looks professional at first glance but requires significant human intervention to be actually useful.
The pattern is familiar from previous waves of automation. A tool promises to make work faster and easier. Early adopters demonstrate impressive results. Companies mandate adoption. Metrics improve. Then reality sets in.
What the metrics don't capture: the time spent correcting AI errors, the cognitive load of constant fact-checking, the frustration of fixing problems that wouldn't exist if you'd done the work yourself, and the quality degradation that comes from treating AI output as a starting point instead of recognizing that it may be worse than starting from scratch.
Executives see: 50% more reports generated per employee. Workers experience: spending 50% of their time editing AI-generated nonsense instead of doing analysis.
Executives see: faster response times to customer inquiries. Workers experience: cleaning up AI responses that missed context, got facts wrong, or responded to the wrong question.
Executives see: more code committed per developer. Workers experience: debugging AI-generated code that works in demos but fails in production.
The disconnect isn't just about measurement. It's about what work is for. If you think work is about producing documents, AI is great - it produces lots of documents. If you think work is about solving problems and creating value, AI becomes another layer of complexity to manage.
Some workers report that AI genuinely helps with tedious tasks: boilerplate code, first drafts, research summaries. The problems emerge when companies treat AI as a replacement for expertise rather than a tool that requires expertise to use effectively.
The "workslop" problem is particularly acute in fields that require accuracy and context. Legal work. Medical diagnoses. Financial analysis. Journalism. These aren't domains where "good enough" output is acceptable, and they're not domains where errors are easily caught.
What's happening is a replay of every technology adoption curve where executives optimize for metrics that don't capture reality. Remember when email was going to make everyone more productive? Instead, it created an expectation of instant response and buried people in cc threads. Remember when open-plan offices were going to boost collaboration? Instead, they destroyed focus and forced everyone into headphones.
AI is following the same pattern: promising liberation from tedious work, delivering a different kind of tedious work that's harder to quantify but just as draining.
For workers, the message is clear: learn to use AI tools effectively, but don't let management's enthusiasm for productivity metrics force you into workflows that make your job worse. For executives, the message should be: your metrics are lying to you, and your employees know it.
