65 employees. Over 1,500 curated games. 600-plus independent developers. 100 million users every single month. Those are not the operational numbers of a team running traditional QA.
They are the output of a browser gaming website that has automated its quality assurance. Poki’s approach to playtesting makes these numbers possible, and the mechanics behind it are worth examining for anyone thinking about quality control at web gaming scale.
The Web Gaming QA Bottleneck
Quality control in game publishing has traditionally meant human testers running builds, logging bugs, and filing reports.
That model works for a studio shipping one title per year. It does not work for a browser gaming operation publishing hundreds of titles to a live audience of 100 million.
The math collapses fast. Each new title requires onboarding tests, engagement checks, device compatibility validation, and audience-fit assessment before it goes live.
Manual testing at that volume would demand a QA team far larger than the entire organization. The real challenge is not testing depth. It is testing throughput.
This is the operational reality that web gaming websites at a meaningful scale have had to confront directly. Browser gaming is no longer a niche category.
A site serving one billion gameplays per month demands a fundamentally different approach to quality operations, and automated quality control has become the standard across industries facing similar throughput pressures.
How Web Gaming Sites Build Automated Playtesting Infrastructure
Automated playtesting generates feedback that mirrors how real people play a browser game. Web gaming platform Poki built its automated playtesting tool directly into the developer submission pipeline, and the mechanics are straightforward to follow.
Developers upload a game build through developers.poki.com. The system then routes that build to real users already active on the site through a feature called the Mystery Tile. Casual gamers see an unidentified tile while browsing.
Clicking it gives them a one-time opportunity to try the unreleased build on their own device, at their own pace.
If they consent to having their session recorded, everything is captured: screen video, button inputs, console output, session length, device type, and country of origin. Developers access all of it through a dashboard, typically within hours of sending the test.
No contract is required. The tool is free to use before any publishing agreement is in place.
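The captured session data described above can be pictured as a simple record plus an aggregation step. This is an illustrative sketch only; the field names and the summary shape are assumptions, not Poki’s actual schema or dashboard API.

```python
from dataclasses import dataclass

# Hypothetical shape of one recorded playtest session, based on the
# data the article lists: video, inputs, console output, duration,
# device type, and country. Names are illustrative assumptions.
@dataclass
class PlaytestSession:
    build_id: str           # the uploaded game build being tested
    video_url: str          # screen recording of the session
    inputs: list[str]       # button/keyboard inputs captured during play
    console_log: list[str]  # console output from the build
    session_seconds: int    # how long the player stayed in the game
    device_type: str        # e.g. "desktop" or "mobile"
    country: str            # player's country of origin
    consented: bool = True  # recording happens only with player consent

def dashboard_summary(sessions: list[PlaytestSession]) -> dict:
    """Aggregate the metrics a developer would likely scan first."""
    recorded = [s for s in sessions if s.consented]
    avg = sum(s.session_seconds for s in recorded) / max(len(recorded), 1)
    return {
        "sessions": len(recorded),
        "avg_session_seconds": round(avg, 1),
        "devices": sorted({s.device_type for s in recorded}),
    }
```

A developer-facing dashboard would surface aggregates like these within hours of a test run, with the raw recordings available for drill-down.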
The speed is the operative variable here. Traditional playtesting with recruited participants takes days to organize and longer to analyze meaningfully.
Poki compresses that cycle by routing builds through its existing base of 100 million active users instead of relying on external recruitment.
At least one developer using the tool saw average session length increase from three minutes and 49 seconds to seven minutes and five seconds in under four days through iterative changes informed directly by the recorded data.
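Put in raw numbers, that jump is larger than it may sound. The figures below come straight from the reported before-and-after session lengths:

```python
# Reported average session lengths, converted to seconds.
before = 3 * 60 + 49   # 3m49s -> 229 seconds
after = 7 * 60 + 5     # 7m05s -> 425 seconds

pct_gain = (after - before) / before * 100
print(f"{pct_gain:.0f}% longer sessions")  # roughly an 86% increase
```

In other words, average engagement nearly doubled over the course of a few days of data-informed iteration.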
For small studios, that feedback loop is significant. Independent developers such as Martin Magni, whose studio Fancade created the physics-based browser title Drive Mad, are part of Poki’s developer community.
Teams at that scale rarely have dedicated QA resources. Free, automated audience testing changes what early-stage development can look like for them.
The Human Layer That Automation Cannot Replace
Automated data collection and automated decision-making are not the same operation. Poki keeps this distinction in its process.
Every game on the site goes through personal review by Poki’s team before it goes live. The curation process includes direct calls with developers, evaluation of portfolio fit, and partnership decisions made through personal contact rather than algorithmic approval.
The playtesting tool handles the data side: session recordings, audience response metrics, and engagement duration. What that data means for a given game’s readiness to publish is still a human judgment call.
Poki co-founder Michiel van Amerongen has described the company’s goal as giving developers direct feedback and infrastructure support, not just a distribution outlet. The playtesting infrastructure is part of that support layer. It informs curation decisions. It does not make them.
This split between automated data gathering and human-led curation reflects how quality control operates in most industries that have scaled successfully.
Sensor-driven inspection in manufacturing flags anomalies automatically. A production engineer still interprets what that pattern means for the run. The logic translates directly to browser gaming.
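The split between automated flagging and human decision-making can be sketched as two separate functions, where the automated layer only surfaces anomalies and the final call always passes through a person. The threshold values and function names here are illustrative assumptions, not Poki’s actual criteria.

```python
# Hypothetical thresholds for the automated layer; not Poki's real values.
ENGAGEMENT_FLOOR_SECONDS = 120
CRASH_RATE_CEILING = 0.05

def flag_for_review(avg_session_seconds: float, crash_rate: float) -> list[str]:
    """Automated layer: flag anomalies, but never approve or reject."""
    flags = []
    if avg_session_seconds < ENGAGEMENT_FLOOR_SECONDS:
        flags.append("low engagement")
    if crash_rate > CRASH_RATE_CEILING:
        flags.append("elevated crash rate")
    return flags

def publish_decision(flags: list[str], reviewer_approves: bool) -> str:
    """Human layer: curation is a judgment call, informed by the flags."""
    if not reviewer_approves:
        return "hold"
    return "publish with notes" if flags else "publish"
```

The design point is that `flag_for_review` has no code path that publishes a game; it can only annotate, which keeps the standards question in human hands.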
Why This QA Model Fits Web Gaming’s Operating Realities
The structural takeaway from Poki’s approach is clear. A team of 65 managing 1,500-plus titles across 600-plus developer relationships can only function if the data-gathering layer runs without manual overhead at every step.
Automated playtesting handles throughput. Human review handles judgment. Neither replaces the other.
For developers, this means real audience feedback at a stage where most studios would otherwise rely entirely on internal testing or ship blind.
For web gaming websites at this scale, it means an intake pipeline that maintains curation standards as the catalogue grows, without adding proportional headcount to QA.
That pairing of automation-driven data collection with human-centered quality decisions is likely the model that web gaming operations across the sector will adopt as browser gaming’s user base expands. The data can be automated. The standards cannot.
