Chat & Content Moderation Dashboard

PubNub's Moderation Dashboard is an open-source, configurable GUI application that adds chat moderator capabilities to your live chat apps, giving you greater safety and control over your communication platform.

Using Moderation Dashboard, you can perform common moderation tasks, such as:

  • Moderating messages (both text and images) for inappropriate content, ensuring a safer environment.
  • Responding to user reports of inappropriate behavior from other users (known as user flagging).
  • Moderating user permissions by muting or blocking users on a channel, or banning them entirely from the application being moderated.

Moderation Dashboard works best as a tool for applications that have been properly configured to support moderation. PubNub provides such a reference chat application that you can use as a starting point to experiment with moderation features. If you'd like to add moderation capabilities to an existing chat app built with PubNub, check the Required Configuration document to learn how to prepare your app for moderation.

One of the main features of Moderation Dashboard is message and content filtering, spanning both text and images:

Text moderation

Moderation Dashboard supports automatic text moderation, which administrators can configure to filter words or entire messages in their chat apps. This feature allows for the automatic masking or blocking of words or text messages before they are published on channels if they are on a predefined forbidden word list or are detected as inappropriate by third-party providers.

  • Word list moderation - lets you manually define a list of forbidden words or select a pre-defined list in one of several supported languages. You can choose to have moderated words replaced with a masking character or to block the entire message, ensuring inappropriate content is never visible in your app.

  • Automatic detection with third-party services - by leveraging external services like Tisane or Sift Ninja (for existing users), you can apply advanced logic to detect and filter inappropriate content automatically. This option allows automated moderation decisions based on categories like bigotry, cyberbullying, and profanity, which is more dynamic than a static list.
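The word-list flow above can be sketched roughly as follows. All type and function names here are illustrative assumptions, not the Dashboard's actual implementation; in the real product this check runs before a message is published to a channel:

```typescript
// Hypothetical sketch of word-list moderation: mask forbidden words with a
// masking character, or block the whole message. Illustrative only.

type WordListPolicy = {
  forbiddenWords: string[];    // manually defined or from a pre-defined list
  maskCharacter: string;       // e.g. "*"
  blockEntireMessage: boolean; // reject the whole message instead of masking
};

type ModerationResult =
  | { action: "allow"; text: string }
  | { action: "block" };

function moderateText(text: string, policy: WordListPolicy): ModerationResult {
  let output = text;
  let matched = false;
  for (const word of policy.forbiddenWords) {
    // Escape regex metacharacters, then match whole words case-insensitively.
    const escaped = word.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    const re = new RegExp(`\\b${escaped}\\b`, "gi");
    if (output.search(re) !== -1) {
      matched = true;
      output = output.replace(re, (m) => policy.maskCharacter.repeat(m.length));
    }
  }
  // Either block the message outright or allow it with forbidden words masked.
  return matched && policy.blockEntireMessage
    ? { action: "block" }
    : { action: "allow", text: output };
}
```

For example, with a policy of `{ forbiddenWords: ["darn"], maskCharacter: "*", blockEntireMessage: false }`, the input "Well darn it" would be allowed as "Well **** it"; with `blockEntireMessage: true`, the whole message would be rejected.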

Image moderation

Similar to text moderation, image moderation is a powerful feature that allows real-time scanning and filtering of images for inappropriate content before they are shared within the chat app.

  • Enabled by Functions, this feature integrates with Sightengine to provide robust image moderation capabilities. It analyzes images against predefined risk factors and blocks those that fail the safety standards set by administrators.

  • Image moderation settings are easily configurable via the Dashboard, allowing channel-specific application and the option to reroute moderated images to a separate .banned channel for review.
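The rerouting behavior described above can be sketched as a small decision step. The score field names and the exact `.banned` channel-naming convention are assumptions for illustration; consult the Dashboard's configuration for the real response shape returned by the scanning service:

```typescript
// Hypothetical sketch: compare risk scores from an image-scanning service
// against administrator-set thresholds, and reroute failing images to a
// ".banned" review channel instead of the original channel. Illustrative only.

type RiskScores = Record<string, number>; // e.g. { nudity: 0.9, violence: 0.1 }
type Thresholds = Record<string, number>; // admin-set maximum per risk factor

function routeImage(
  channel: string,
  scores: RiskScores,
  thresholds: Thresholds
): string {
  for (const [factor, max] of Object.entries(thresholds)) {
    // Missing scores are treated as zero risk.
    if ((scores[factor] ?? 0) > max) {
      return `${channel}.banned`; // reroute for moderator review
    }
  }
  return channel; // image passed: publish to the original channel
}
```

This keeps the moderation decision channel-specific: each channel's thresholds can differ, and flagged images stay reviewable by moderators rather than being silently dropped.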

Call for contribution

Moderation Dashboard, like any other open-source project, values contributions. If you want to help the project evolve, raise an issue or create a pull request directly in the Moderation Dashboard repository.

Get Started

Run the local GUI to learn how to moderate messages, users, and channels. Use filtering and detection features to maintain a secure chatting environment.

Demo

Explore an interactive Moderation Dashboard demo with sample user and channel data. Then see your moderation in action within an integrated React live chat app.

Source Code

View the source code, and create issues and pull requests to contribute to the tool's development.

Tutorials

Set up Moderation Dashboard locally and run it together with the moderated chat app.

Features

Moderation Dashboard supports: