Domain/Problem Statement:
CivitAI provides an extremely broad and deep range of models imitating real persons of interest (PoIs)
Using these models to generate NSFW images and posting them to the CivitAI site breaches the ToS, yet is a frequent occurrence
Detection of these images relies on a blend of facial recognition and human intuition
Where a PoI is particularly well known, infringing images are likely to be filtered out quickly using available tools.
However, if a PoI is less well known (outside the anglosphere, for instance), the images can go undetected. Even when a member of the community reports them, moderators may dismiss the report if the PoI is not recognisable to them or not present in the datasets used by recognition tools.
CivitAI aims to host sponsored advertisements in the near future, and must therefore hold user-generated content to a high standard of ToS compliance.
Proposed changes:
For automation:
automated facial recognition must run whenever pre-existing tagging solutions flag an image (NSFW detected), automatically rejecting at a high level of confidence and generating a ticket in less certain cases
the broadest available recognition datasets must be identified and incorporated into existing tooling where possible
tooling should err towards caution (i.e., ads must not appear next to unauthorised or lewd celebrity impersonations)
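The confidence-gated routing described above can be sketched as follows. This is a minimal illustration, assuming an upstream NSFW tagger and a facial-recognition match score; the threshold values and function name are assumptions for discussion, not existing CivitAI APIs.

```python
# Hypothetical sketch of the confidence-gated moderation flow.
# Thresholds are illustrative assumptions, not tuned values.
AUTO_REJECT_THRESHOLD = 0.95  # assumed: confident PoI match -> automatic rejection
TICKET_THRESHOLD = 0.60       # assumed: uncertain match -> human review ticket

def route_image(nsfw_detected: bool, match_confidence: float) -> str:
    """Decide what happens to an upload after facial recognition runs.

    Facial recognition is only consulted when the pre-existing tagging
    pipeline has flagged the image as NSFW.
    """
    if not nsfw_detected:
        return "publish"
    if match_confidence >= AUTO_REJECT_THRESHOLD:
        return "reject"   # confident PoI likeness in NSFW content
    if match_confidence >= TICKET_THRESHOLD:
        return "ticket"   # uncertain: queue for moderator review
    return "publish"      # no credible PoI match found
```

Erring towards caution here simply means setting both thresholds conservatively, so borderline images land in the ticket queue rather than next to advertisements.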
For creators:
LoRA/embedding creators must be able to submit matching images (from training data) for their celebrity creations, which can then be incorporated into facial recognition matching
User-submitted matching images should be manually verified before being added to facial recognition tooling
Users should be incentivised to submit accurate images from training data when creating a PoI model
Users could be prevented from publishing PoI models without providing accurate images from training data
Users should be advised if their PoI isn't present in recognised detection data
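The creator-submission lifecycle above can be modelled as a small state machine: submitted images wait for manual verification, and publication is gated on a verified submission. The class, state, and field names below are assumptions for illustration only.

```python
# Illustrative sketch of the creator reference-image lifecycle.
# Names are hypothetical, not existing CivitAI code.
from dataclasses import dataclass
from enum import Enum

class SubmissionState(Enum):
    PENDING_REVIEW = "pending_review"  # awaiting manual verification
    VERIFIED = "verified"              # approved for the matching dataset
    REJECTED = "rejected"              # flagged as inaccurate or inappropriate

@dataclass
class PoIReferenceSubmission:
    model_id: str                      # the LoRA/embedding being published
    image_hashes: list                 # hashes of submitted training images
    state: SubmissionState = SubmissionState.PENDING_REVIEW

    def can_publish(self) -> bool:
        # Gate publication on a manually verified reference submission.
        return self.state is SubmissionState.VERIFIED
```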
For image generators:
Users must be informed if their NSFW image is detected to contain a PoI likeness
Users must be given an opportunity to appeal the detection
For Moderators:
Moderators must be able to utilise the tool on demand with preexisting matching information
Moderators must be able to provide one or more images of a missing PoI to the matching model for evaluation of images (sample use case: a LoRA based on a retired 1990s glamour model is used to generate infringing images; the model is not present in any acknowledged dataset but is identified by the community)
Moderators must be able to flag inappropriate or inaccurate detection data supplied by creators
Moderators must be able to, and be trained to, see and interpret underlying matching data, so they can provide feedback to users and further develop the tool
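On-demand matching from moderator-supplied reference images could work by comparing face embeddings, e.g. by cosine similarity. The sketch below assumes embeddings have already been produced by whatever facial-recognition backend is chosen; the function names and threshold are illustrative, not a real API.

```python
# Hedged sketch: match candidate images against moderator-supplied references
# using cosine similarity of face embeddings.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_poi(reference_embeddings, candidate_embedding, threshold=0.8):
    """True if the candidate matches any moderator-supplied reference.

    A single reference image can be enough to start; supplying several
    references reduces false negatives for PoIs absent from acknowledged
    datasets (the retired-glamour-model use case above).
    """
    return any(cosine_similarity(ref, candidate_embedding) >= threshold
               for ref in reference_embeddings)
```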
For PoIs:
PoIs should be able to declare, after verification (directly or via a representative), that they do not consent to their likeness being used in CivitAI image data at all; any images detected using this likeness are then treated as infringing the ToS
PoIs must be protected from infringing likenesses (including those which are incidental or unintentional)
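The opt-out mechanism above amounts to a consent registry consulted after any likeness match. A minimal sketch, assuming identity verification happens out of band and that matched images carry a PoI identifier; all names here are hypothetical.

```python
# Minimal sketch of a verified-PoI consent opt-out registry.
# Identity verification is assumed to happen before record_opt_out is called.
class OptOutRegistry:
    def __init__(self):
        self._opted_out = set()  # identifiers of verified, non-consenting PoIs

    def record_opt_out(self, poi_id: str) -> None:
        """Register a verified PoI (or their representative) as non-consenting."""
        self._opted_out.add(poi_id)

    def is_infringing(self, matched_poi_id: str) -> bool:
        # Any detected likeness of an opted-out PoI is treated as a ToS
        # breach, regardless of whether the image is NSFW.
        return matched_poi_id in self._opted_out
```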
A robust solution, as described above, shall:
be self-improving
be open source for external review by all interested parties
be reviewed by independent experts/concerned bodies
be accurate across a variety of media
reduce overall administrative and moderation burden
improve moderation accuracy and trust within the community
provide CivitAI with a robust defense in the event of infringement
provide other stakeholders with confidence in CivitAI
contribute to public safety
protect the rights of PoI
A potential subset or extension to this service could also be developed to identify at-risk persons, CSAM and so forth.
Awaiting Dev Review
💡 Feature Request
About 2 years ago

MajMorse