After quitting Genshin Impact, Honkai: Star Rail, and Zenless Zone Zero, I recently became obsessed with Wuthering Waves (yes, the game behind the meme “xxx only needs to xxx, while xxx has a lot more to think about”). Loot boxes are an inescapable part of any conversation about these gacha games: where there are pulls, there are wins and losses on characters and weapons, which has spawned a bunch of third-party mini-programs like “xxx Helper” or “xxx Workshop” whose key features include gacha history analysis.
The story began when a friend wanted to see my Star Rail pull records, but upon opening the mini-program I used before, all previously saved histories were gone. I tried downloading the cloud version of Star Rail to re-import the gacha link and recover the data, but after half a day of tinkering, the mini-program kept reporting an invalid gacha URL, and the historical records were never retrieved. I couldn’t get over it, so I decided to build my own gacha log analysis tool, keeping the data in my own hands—hence this gacha analysis tool was born.

Astrionyx
Astrionyx is a Next.js-based web app that analyzes the history of each banner type; data can be imported and updated manually, or imported via an API.
It can be deployed on Vercel or on your own server, and supports both MySQL and Vercel Postgres. Currently, overseas traffic resolves to the Vercel deployment with Vercel Postgres, while traffic from China resolves to my own server backed by my own MySQL database, which is super convenient.
During development, I ran into quite a few interesting problems. If you’re planning to write a similar app yourself, I hope this helps.
Data Import & Update
Data Import
Like most games, Wuthering Waves provides no official API to export gacha data. Its history is shown by generating a temporary link when the user clicks the “Gacha History” button, opening it in an in-game browser. We can capture this link (starting with https://aki-gm-resources.aki-game.com/aki/gacha/index.html) via packet sniffing or log files.
The page POSTs to the API https://gmserver-api.aki-game2.com/gacha/record/query to fetch each banner’s pull history. The request contains key info like the player ID (playerId), server ID (serverId), and banner ID (cardPoolId), all of which can be extracted from the link’s query parameters.
The banner type parameter cardPoolType takes integer values from 1 to 7, each corresponding to a different banner (featured character, featured weapon, standard, and so on).
With these parameters in hand, we can create a backend API route that forwards the request to the official API and fetches each banner’s history.
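A minimal sketch of such a forwarding route, written as a Next.js App Router handler. The official endpoint URL comes from the captured gacha link above; the file location, validation, and response handling are illustrative, and the official API’s exact request/response schema isn’t reproduced here:

```typescript
// app/api/gacha/route.ts -- a minimal forwarding endpoint (sketch).

const OFFICIAL_API = "https://gmserver-api.aki-game2.com/gacha/record/query";

export async function POST(request: Request): Promise<Response> {
  // Parameters extracted from the gacha URL on the client side:
  // playerId, serverId, cardPoolId, cardPoolType, ...
  const params = await request.json();

  // Forward them verbatim to the official API.
  const upstream = await fetch(OFFICIAL_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });

  if (!upstream.ok) {
    return Response.json({ error: "upstream request failed" }, { status: 502 });
  }

  // Hand the banner history straight back to our frontend.
  return Response.json(await upstream.json());
}
```

The frontend then calls this route once per banner, passing along the parameters it parsed from the captured link.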
Data Update
In games like Wuthering Waves, pulls come in “10-pull” batches.

10-pull
As shown, a “10-pull” can yield two identical items: two records whose attributes, including the pull time, are exactly the same, so their JSON entries are indistinguishable.
This makes it impossible to tell, during an update, which records have already been imported, which hurts the accuracy of the statistics. We need an ID that can distinguish two identical items even at the same timestamp.
We can build a unique ID from the banner type ID + timestamp + draw index, where the draw index counts from 1 across pulls sharing the same timestamp. The two identical weapons above thus get the uniqueIds 01174845445601 and 01174845445605: they were the first and fifth records at that timestamp.
Even if imported data overlaps with existing records, new pulls can still be identified and the database updated correctly.
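The scheme above can be sketched as follows; the record shape is illustrative rather than the exact official API schema:

```typescript
// Illustrative record shape -- field names may differ in the real API.
interface GachaRecord {
  cardPoolType: number; // banner type ID, 1-7
  name: string;
  time: string; // e.g. "2025-05-28 20:27:36"
}

// Assigns each record a unique ID: zero-padded banner type (2 digits)
// + unix timestamp (seconds) + draw index (2 digits, counting from 1
// among records sharing the same banner and timestamp).
function buildUniqueIds(records: GachaRecord[]): string[] {
  const nextIndex = new Map<string, number>();
  return records.map((r) => {
    const ts = Math.floor(new Date(r.time.replace(" ", "T")).getTime() / 1000);
    const key = `${r.cardPoolType}-${ts}`;
    const index = (nextIndex.get(key) ?? 0) + 1;
    nextIndex.set(key, index);
    return (
      String(r.cardPoolType).padStart(2, "0") +
      String(ts) +
      String(index).padStart(2, "0")
    );
  });
}
```

Because the draw index only disambiguates within one timestamp, re-importing an overlapping range regenerates the same IDs, so an upsert on uniqueId inserts only the genuinely new pulls.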
Probability Calculation
The stats overview page uses the ECharts library to render a pull-probability line chart as a component background, showing the gacha system’s probability distribution. Chart data is computed by two main functions: one for the theoretical probability and one for the actual, observed probability.
Theoretical Probability
According to Bilibili uploader “A Balanced Tree”’s Brief Analysis of WuWa Gacha, the 5-star rate follows a piecewise model in the pull count $n$: a flat base rate of 0.8% before soft pity kicks in, then a rapidly increasing rate that guarantees a 5-star by the hard pity at pull 80.
Based on this model, we can build a theoretical probability function calculateTheoreticalProbability.
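A sketch of such a function. The 0.8% base rate and the 80-pull hard pity are Wuthering Waves’ published values; the linear soft-pity ramp starting at pull 66 is an assumed shape for illustration, not the fitted model from the referenced analysis:

```typescript
const BASE_RATE = 0.008; // published 5-star base rate (0.8%)
const SOFT_PITY_START = 66; // assumed start of the ramp
const HARD_PITY = 80; // guaranteed 5-star

// Theoretical probability of a 5-star on the n-th pull since the last one.
function calculateTheoreticalProbability(n: number): number {
  if (n < SOFT_PITY_START) return BASE_RATE;
  if (n >= HARD_PITY) return 1;
  // Linear ramp from the base rate up to certainty at the hard pity (assumed).
  const step = (1 - BASE_RATE) / (HARD_PITY - SOFT_PITY_START + 1);
  return Math.min(1, BASE_RATE + step * (n - SOFT_PITY_START + 1));
}
```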
Actual Probability
Since the tool is mainly for personal use, sample sizes are small, and some pull counts may have no data at all. Estimating the probability directly from observed frequencies therefore swings wildly, producing 0% or 100% extremes.

Frequency estimate
To avoid this, we apply Bayesian smoothing:

$$P_{\text{smoothed}}(n) = \frac{k(n) + \alpha \cdot P_{\text{theory}}(n)}{N(n) + \alpha}$$

Where:

- $k(n)$: number of 5-star wins observed at pull position $n$
- $N(n)$: total sample count at pull position $n$
- $P_{\text{theory}}(n)$: theoretical probability at pull $n$
- $\alpha$: smoothing factor

Behavior:

- When $N(n)$ is small, the estimate leans toward the theoretical probability
- When $N(n)$ is large, it leans toward the observed frequency $k(n)/N(n)$

Smoothed result:

Bayesian smoothing
Implementation:
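A minimal sketch of the smoothing function, assuming the per-position counts have already been aggregated (the default smoothing factor of 10 is an illustrative choice):

```typescript
// Bayesian-smoothed probability estimate for one pull position n.
function smoothedProbability(
  fiveStars: number, // k(n): 5-star wins observed at position n
  totals: number, // N(n): sample count at position n
  theoretical: number, // P_theory(n): theoretical probability at n
  alpha = 10, // smoothing factor: larger = stronger pull toward theory
): number {
  return (fiveStars + alpha * theoretical) / (totals + alpha);
}
```

With no data the estimate equals the theoretical curve exactly, and as the sample grows it converges to the raw frequency, so the chart stays sensible at every sample size.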
Auto Deployment
As mentioned, Astrionyx is deployed on both Vercel and my own server. Every time I push, I used to have to SSH in, pull, and build—super tedious. The theme author once wrote a workflow that auto-builds and deploys the theme to a remote server; I adapted it for Astrionyx.
But the network quality between GitHub and my server is awful: transferring the build artifact took more than 20 minutes on average. Luckily, Cloudflare R2 offers 10 GB of free storage with zero egress fees, and its download speeds inside China are decent, so it can act as a “transfer station.”
The main workflow now retries each upload and download up to 5 times, since a single attempt can still fail due to network hiccups; 5 retries handle most cases.
With this workflow, every push to main automatically builds and deploys Astrionyx. Using R2 as a transfer station cuts deployment time from 20 min to under 4 min—saving lives.

Optimized with R2
That's all🎉.