Employers obsessed with AI‑usage dashboards have accidentally trained their staff to fake productivity, turning genuine work into metric‑driven theater.
According to a Financial Times (FT) story published on 12 May 2026, Amazon employees have been using an internal AI system called MeshClaw to generate low‑value, often unnecessary tasks simply to inflate their reported AI usage metrics.
The behavior, which the publication labels “tokenmaxxing”, illustrates how some workers optimize for internal numbers rather than for genuine productivity gains from AI.
The scheme works like this: staff use MeshClaw to create AI agents that connect to workplace software, triage emails, initiate code deployments, and interact with collaboration tools. In theory, the platform is designed to automate repetitive, time‑consuming work, freeing developers to focus on higher‑value projects. In practice, however, the FT article suggests that many Amazon employees treat it as a way to rack up token‑usage statistics, often by running trivial or redundant operations that contribute little to actual output.
The report attributes the behavior to internal targets Amazon introduced requiring more than 80% of developers to use AI tools each week, alongside dashboards and leaderboards that track token consumption by team and individual.
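To make the incentive problem concrete, here is a minimal sketch of the kind of dashboard arithmetic such a system might run. All names and numbers are invented for illustration and are not from the FT report: the adoption target is the only figure taken from the reporting.

```python
from collections import defaultdict

# Toy records: (developer, team, tokens used this week). Hypothetical data.
weekly_usage = [
    ("dev_a", "search", 120_000),
    ("dev_b", "search", 0),
    ("dev_c", "ads", 95_000),
    ("dev_d", "ads", 450_000),   # a "tokenmaxxer" running trivial jobs
    ("dev_e", "retail", 30_000),
]

TARGET_SHARE = 0.80  # the >80% weekly-adoption target cited in the report

# Share of developers who used AI tools at all this week.
active = sum(1 for _, _, tokens in weekly_usage if tokens > 0)
adoption = active / len(weekly_usage)

# Per-team token totals, aggregated the way a leaderboard would.
team_tokens = defaultdict(int)
for _, team, tokens in weekly_usage:
    team_tokens[team] += tokens

print(f"adoption: {adoption:.0%} (target {TARGET_SHARE:.0%})")
print(sorted(team_tokens.items(), key=lambda kv: -kv[1]))
```

Note how the leaderboard rewards raw volume: a single developer burning tokens on trivial jobs lifts their whole team to the top, which is exactly the gaming behavior the article describes.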
Although the tech firm tells staff that these metrics will not be formally used in performance evaluations, several employees told the FT that they believe managers are still watching the data, creating pressure to “game” the system and appear highly engaged with AI.
The FT frames this dynamic as part of a wider Silicon Valley trend in which firms increasingly tie status, access, and even informal bonuses to raw AI usage numbers. It points to similar “tokenmaxxing” behavior at Meta, where an internal leaderboard once ranked top token users among tens of thousands of employees, until the social media firm took it down amid criticism of the waste it encouraged.
By highlighting how workers are driven to inflate token counts, the FT report raises a broader question: do current AI‑adoption metrics reward genuine efficiency, or merely superficial activity? Beyond the dashboards themselves, motivations for gaming AI usage policies include career signaling and advancement, workload bargaining, and leverage over job security.