Can NSFW AI Understand Artistic Expression?

Understanding art requires nuanced classification, a task NSFW AI is fundamentally bad at because most systems reduce every image to a binary safe-or-unsafe decision. Tools can be programmed to remove inappropriate content against a limited set of criteria, but systems do not even agree on whether suggestive emoji count as pornographic, and some filters block artistic works based on color alone. A 2022 study found that as much as 30% of nudity in art was wrongly flagged as NSFW by AI models, a wide margin of error.
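
To see why a single binary decision produces these errors, here is a minimal sketch in Python. The `nudity_score` helper, the example items, and their scores are all hypothetical stand-ins for a real classifier; the point is that one threshold treats a Renaissance painting and an explicit photograph identically once both scores cross it.

```python
# Minimal sketch of binary moderation: one threshold, no context.
THRESHOLD = 0.5  # the same cutoff for every image

def nudity_score(item: str) -> float:
    """Hypothetical stand-in for a real model: returns P(nudity)."""
    toy_scores = {
        "Botticelli's Birth of Venus": 0.72,  # artistic nudity
        "explicit photograph": 0.95,
        "beach vacation photo": 0.40,
    }
    return toy_scores.get(item, 0.0)

def moderate(item: str) -> str:
    # The filter sees only the score, so the painting and the
    # explicit photo receive the same verdict.
    return "BLOCKED" if nudity_score(item) > THRESHOLD else "ALLOWED"

for item in ("Botticelli's Birth of Venus",
             "explicit photograph",
             "beach vacation photo"):
    print(f"{item}: {moderate(item)}")
```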

Art is especially hard for AI to judge because it often places nudity or other sensitive themes in an aesthetic or narrative context, which tests a neural network's ability to read intent rather than just content. DeviantArt, for example, reported that nearly a quarter of the non-explicit work removed by its AI systems was taken down because the models could not distinguish artistic expression from explicit sensual imagery, and more than 10% of daily removals stemmed from such misclassification.

Making sense of artistic intent calls for far more sophisticated algorithms with deeper context detection and an understanding of nuance. One example came in 2023, when Generative Adversarial Networks (GANs) were used to train models on sprawling, diverse datasets. These GAN-based systems improved accuracy at differentiating artistic from explicit content by 20%, but they still fall short.
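
One way such systems move beyond a binary verdict is to predict several context-aware classes instead of two. The sketch below illustrates that idea with a small PyTorch classifier; the three-class label set and the ResNet-18 backbone are my assumptions for illustration, not the architecture from the work cited above.

```python
# Sketch: replace a binary safe/unsafe head with a three-way one
# so "artistic nudity" becomes its own class. Class names and the
# backbone are illustrative assumptions, not from the cited study.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["safe", "artistic_nudity", "explicit"]

class ArtAwareClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # In practice you would start from pretrained weights.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features,
                                     len(CLASSES))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)

model = ArtAwareClassifier().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy image batch
    probs = logits.softmax(dim=-1).squeeze()

for name, p in zip(CLASSES, probs):
    print(f"{name}: {p.item():.2f}")
```

An untrained model like this prints near-uniform probabilities; the design point is that the label space itself carries the artistic-versus-explicit distinction, so a downstream filter no longer has to infer it from a single score.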

Experts such as Dr. Emily Bender have gone as far as to claim that "artistic expression is resistant to easy pigeonholing and rests on interpreting language, a task with which existing AI models already struggle." The result is often the removal of artistic expression that is perfectly legal, which harms both creators and platforms.

ArtStation, for instance, was hit by this in 2022, when a wave of digital artwork was flagged for depicting "realistic human forms" despite clear artistic intent. The ongoing problem highlights the need for nsfw ai to develop a better grasp of when content is, in fact, art.

Hybrid models, which combine human oversight with AI, have been investigated to tackle these challenges. Moderation becomes a continuum: automated systems handle the clear-cut cases at the algorithmic end, and ambiguous ones are escalated to human judgment, as sketched below. By 2023, this approach had cut false positives on artistic content by 15%, improving overall accuracy.
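
A minimal sketch of that continuum in Python: the model auto-decides only at the confident extremes and escalates the gray zone, where artistic nudity usually lands, to a human queue. The threshold values here are illustrative assumptions, not the ones behind the 15% figure.

```python
# Sketch of hybrid (human-in-the-loop) routing by model confidence.
from dataclasses import dataclass

AUTO_ALLOW = 0.15  # below this P(explicit): publish automatically
AUTO_BLOCK = 0.90  # above this P(explicit): remove automatically

@dataclass
class Decision:
    action: str  # "allow", "block", or "human_review"
    score: float

def route(explicit_score: float) -> Decision:
    """Route one item based on the model's estimated P(explicit)."""
    if explicit_score < AUTO_ALLOW:
        return Decision("allow", explicit_score)
    if explicit_score > AUTO_BLOCK:
        return Decision("block", explicit_score)
    # Ambiguous middle band: defer to a human moderator instead
    # of auto-removing possible artistic content.
    return Decision("human_review", explicit_score)

for score in (0.05, 0.55, 0.97):
    print(f"{score:.2f} -> {route(score).action}")
```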

For NSFW AI to reliably recognize artistic expression, then, it must evolve toward more nuanced, context-specific analysis, combining human moderation with automation in an integrated workflow.
