User generated content from your customers is powerful. It puts your advertisers in control of their message and it creates real engagement between your clients and their OOH campaign.
The downside of user generated content is moderating and approving what flows from users to your digital screens.
As the volume of user generated content grows, so does the workload of constantly moderating messaging coming from your advertisers.
Lucit is deploying a new set of tools to help solve some of these issues and alleviate strain on traffic and ad moderation teams, cutting the amount of work required to moderate content by up to 80-90%. Operators can approve ads at any time of day, from either the desktop application or their phones with the Lucit app.
With Lucit Moderation and the AI Ad Approver, we have deployed a system built on two engines: User Trust and AI Driven Image and Text Moderation.
User Trust Engine
Lucit authenticates users in-app and then keeps track of which users made modifications to any portion of a creative. We then create a trust model between the operator and the user that allows each operator to decide which users they do and do not trust.
In addition to “Human Users,” there is also machine generated content imported into the system from data partners (vehicle data, real estate data, eCommerce, etc.). Each of these data feeds is attached to a special type of authenticated bot-user that represents the incoming data from that partner.
Image Detection and Text Moderation
All content is passed through these detection tools to attempt to detect anything from graphic content such as nudity and violence to hate symbols and profanity. Based on our machine learning models we then assign every image an automated moderation score.
The moderation score is a “probability that this image is safe” measurement that errs on the side of “not safe” when there is even the slightest chance that the image may contain something inappropriate.
Once the user trust detection and moderation scores are complete, we can then determine whether to automatically approve the creative, or queue it for human review.
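To make the decision step above concrete, here is a minimal sketch of how the two signals might combine. The names, threshold, and structure are illustrative assumptions, not Lucit's actual implementation:

```python
# Hypothetical sketch of the approval routing described above.
# The threshold value and field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Creative:
    editor_trusted: bool     # does the operator trust every user who edited it?
    moderation_score: float  # "probability this image is safe", 0.0 to 1.0

SAFE_THRESHOLD = 0.95  # assumed cutoff; errs on the side of "not safe"

def route(creative: Creative) -> str:
    """Auto-approve only when both engines agree the creative is safe."""
    if creative.editor_trusted and creative.moderation_score >= SAFE_THRESHOLD:
        return "auto-approve"
    return "human-review"

# A trusted user's clean image passes; anything else is queued for a human.
print(route(Creative(editor_trusted=True, moderation_score=0.99)))   # auto-approve
print(route(Creative(editor_trusted=False, moderation_score=0.99)))  # human-review
```

The key design point is that both engines must agree: a high image-safety score alone never bypasses review if an untrusted user touched the creative.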
And because the Lucit system provides each operator with individual, fine-grained controls, they can customize their workflows according to their needs.
To understand why this dual model of User Trust and AI Moderation is useful, we have to think about what AI-powered image moderation is likely to miss. Some operators may have internal rules about what types of messaging are not allowed on their screens. It could be messaging related to certain political issues, controversial messaging, or some other factor that an AI moderation system will have a difficult time evaluating automatically.
The User Trust model helps to ease this burden.
For instance, let’s say you have Jim at the Auto Dealership, and Jim has an employee who helps with his advertising, John. You have had a professional relationship with Jim for years, but John is new.
In addition to this, the majority of Jim’s ads are auto-generated vehicle ads that are being created by his data-fed inventory.
You can trust Jim, and you can trust the Data Feed, but you might not want to trust John quite yet. Even though they are all operating under the same auto-dealership account, the user trust models allow the operator fine-grained control over which creatives can pass through the AI Approval System automatically.
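The per-user trust scenario above can be sketched as a simple trust table. The user names come from the example; the data structure and function are hypothetical illustrations, not Lucit's API:

```python
# Illustrative per-user trust within a single advertiser account.
# Names are from the Jim/John example; the structure is an assumption.
trust = {
    "jim": True,               # long-standing professional relationship
    "vehicle-data-bot": True,  # authenticated bot-user for the inventory feed
    "john": False,             # new employee, not trusted yet
}

def can_auto_approve(editors: set[str]) -> bool:
    """A creative can skip human review only if every editor is trusted."""
    return all(trust.get(user, False) for user in editors)

print(can_auto_approve({"jim", "vehicle-data-bot"}))  # True
print(can_auto_approve({"jim", "john"}))              # False (John touched it)
```

Auto-generated vehicle ads from the trusted data feed flow straight through, while anything John edits waits for the operator, even though all three operate under the same dealership account.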
Learn more about Lucit’s Moderation and Approval system at https://lucit.cc/lucit-content-security-overview/
Lucit, founded in 2019, makes software that allows advertisers to dynamically control digital billboards with a smartphone app and run automatically generated creatives linked to inventory systems for Automotive, Real Estate, Motorsports and Retail eCommerce systems. For more information about Lucit, visit https://lucit.cc