Collect a clue from text or an image
A typed name, phone tail, platform handle, plate, screenshot, or photo can all start the workflow.
Blacklist Assistant already runs as a private iPhone app. It supports direct lookup, image-assisted lookup from photos and screenshots, manual confirmation, and editable blacklist records inside one workflow.
Its working surfaces today are lookup, add to blacklist, confirmed-blacklist review, and account/settings.
The current flow is built around moving from clue to decision without losing context. You can start from text, images, or a remembered identifier, then decide whether the result is confirmed or still needs review.
The app checks whether that clue already points to someone the household has decided to avoid.
When the evidence is incomplete, the case can sit in review instead of becoming a bad confirmed record.
Once confirmed, the detail page stays editable, so the record can be cleaned up over time instead of recreated.
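The clue-to-decision lifecycle described above can be sketched as a small state model. This is a minimal illustration, not the app's actual implementation; the names `Record`, `Status`, `lookup`, and the clue keys are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    """Record lifecycle: weak evidence parks in review, not in confirmed."""
    IN_REVIEW = "in_review"
    CONFIRMED = "confirmed"

@dataclass
class Record:
    """One household blacklist entry; stays editable after confirmation."""
    clues: dict = field(default_factory=dict)  # e.g. {"phone_tail": "4821"}
    status: Status = Status.IN_REVIEW
    notes: str = ""

    def confirm(self) -> None:
        # Confirmation is a deliberate step, taken only when evidence suffices.
        self.status = Status.CONFIRMED

    def amend(self, **updates) -> None:
        # Confirmed records are corrected in place rather than recreated.
        self.clues.update(updates)

def lookup(records: list, clue_key: str, clue_value: str) -> list:
    """Return records whose stored clues match the incoming clue."""
    return [r for r in records if r.clues.get(clue_key) == clue_value]

# A weak clue starts in review; later it is confirmed and then cleaned up.
r = Record(clues={"phone_tail": "4821"})
records = [r]
assert lookup(records, "phone_tail", "4821") == [r]
r.confirm()
r.amend(name="example name")
```

The key design point mirrored here is that review is a first-class state, not a failure mode: an incomplete case simply waits instead of polluting the confirmed list.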
These are real screens from the latest app. The UI is in Chinese today, but the workflow is the same one described on this page.
That means the app has to hold different clue types, let someone stop and review uncertain cases, and keep a confirmed record easy to reopen later.
Name, phone tail, platform account, plate, and image text can all point into the same decision flow.
Not every clue is strong enough on its own, so the app keeps space for a deliberate review step.
Once something is confirmed, the detail page stays available for cleanup, correction, and later reuse.
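The idea that several clue types feed one decision flow can be illustrated with a small normalizer that classifies a free-form clue into a lookup key. The heuristics and field names below are assumptions for the sketch only; a real app would need stricter validation and locale-aware rules.

```python
import re

def normalize_clue(raw: str) -> tuple:
    """Classify a free-form clue into (kind, canonical value).

    Illustrative heuristics only: digits-only input is treated as a
    phone tail, '@'-prefixed input as a platform handle, short
    letter-digit strings as a plate, and anything else as a name.
    """
    text = raw.strip()
    digits = re.sub(r"\D", "", text)
    if digits and len(digits) >= 4 and digits == re.sub(r"[\s-]", "", text):
        # Keep only the last four digits: a phone tail is often all
        # a household remembers.
        return ("phone_tail", digits[-4:])
    if text.startswith("@"):
        return ("platform_handle", text[1:].lower())
    if re.fullmatch(r"[A-Z0-9-]{5,8}", text.upper()) and any(c.isdigit() for c in text):
        return ("plate", text.upper().replace("-", ""))
    return ("name", text.lower())

# Different clue shapes, one decision flow.
print(normalize_clue("555-0182"))      # ('phone_tail', '0182')
print(normalize_clue("@some_handle"))  # ('platform_handle', 'some_handle')
print(normalize_clue("AB-123C"))       # ('plate', 'AB123C')
```

Whatever the clue's surface form, it ends up as the same (kind, value) pair, which is what lets one lookup and one review step serve every entry point.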
Today’s iPhone app already handles the operational core. The next steps deepen that direction rather than change it.
Tell Coratina which services matter, whether the record is just for you or should protect others at home, and what repeat encounter you are trying to avoid.