Tutorial: Moderate Chat with PubNub's Moderation Dashboard

Chandler Mayo on Sep 27, 2021

This tutorial will walk you through how to configure and use the new open-source PubNub Moderation Dashboard to moderate a chat application. After reading this tutorial you will be able to use the tools within the Moderation Dashboard to moderate a user’s messages, as well as block and ban users to enforce community guidelines.

If you’re not yet familiar with the new open-source PubNub Moderation Dashboard, you should review our Open-Source Moderation Chat Dashboard Quickstart to get started with chat moderation. The Moderation Dashboard makes it easy to moderate chat messages and images for any of your PubNub API keys. It’s also totally free for you to take and use.

Building moderation features like chat and image moderation isn’t easy, and building them takes development time away from the things you actually want to focus on. We built the Moderation Dashboard as both a complete implementation of these moderation features and as a starting point for integrating moderation into applications you’ve already created.

Before you can follow this tutorial, you must complete our Open Source Moderation Chat Dashboard Quickstart so that you have your own instance of the Moderation Dashboard running. Once it’s up and running, come back to this tutorial to learn how to start a demo chat application and moderate user activity with the Moderation Dashboard.

Let's get started moderating user chat activity.

Running the Moderated Chat App

The Open Source Moderation Dashboard provides you with a GUI for moderating user chat messages. However, you’ll need a chat application to moderate in order to try out the features for yourself.

There are requirements and design patterns to consider if you plan to use all of the features available in the Moderation Dashboard; they are described as part of the Moderation Dashboard documentation. We have also created a sample moderated chat app as part of the React Chat Components repo.

For this tutorial, we will use the Moderated Chat App Sample that’s part of the React Chat Components repo. We will run it locally alongside the Moderation Dashboard from the Open Source Moderation Chat Dashboard Quickstart.

Environment setup

You’ll need a free PubNub account to run the Moderated Chat App Sample. The Moderated Chat App Sample is part of the React Chat Components repo and works well with the Moderation Dashboard you should have running from our Quickstart. 

How to set up your Node.js development environment

To start using our Moderated Chat App Sample, you first need to install Node.js.

If you’ve already installed Node.js, go to the next section. You may have followed this step in the Quickstart.

If you don’t have Node.js installed, visit the Node.js Downloads page and select the installer for your platform.

Open your terminal and run the following commands to check the installation:
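
node --version
npm --version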

You should see a version number after you run these commands. 

How to set up Git to clone the Moderated Chat App Sample

To clone the Moderated Chat App Sample from GitHub, you need Git installed.

If you’ve already installed Git, go to the next section. You may have followed this step in the Quickstart.

To install Git, follow the Getting Started - Installing Git guide.
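
For example (assuming you use Homebrew on macOS or apt on Debian/Ubuntu; other platforms have their own installers):

# macOS (Homebrew)
brew install git

# Debian / Ubuntu
sudo apt-get install git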

Open your terminal and run the following command to check Git installation:
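
git --version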

You should see a version number after you run the command.

Your development environment is now ready. Next, you need to configure your PubNub API keys for React Chat Components. 

Chat API setup for Moderated Chat App Sample

You’ll need a PubNub account to use the PubNub React Chat Components. The end result will be two API keys that you’ll use in your chat application. A PubNub account and API keys are always free.

  1. You’ll first need to sign up for a PubNub account.

  2. Sign in to your PubNub Dashboard.

  3. Go to Apps.

  4. Click Create New App.

  5. Give your app a name, and select Chat App as the app type.

  6. Click Create.

  7. Click your new app to open its settings.

  8. When you create a new app the first set of API keys is generated automatically. However, a single app can have as many keysets as you like. PubNub recommends that you create separate keysets for production and test environments.

  9. Select a keyset. 

  10. Enable the Presence feature for your keyset.

  11. Enable the Storage & Playback feature for your keyset. 

  12. Enable the Files feature for your keyset and select a region.

  13. Enable the App Context feature for your keyset and select a region. You must enable User Metadata Events, Channel Metadata Events, and Membership Events.

  14. Save the changes.

  15. Copy the Publish and Subscribe keys for the next steps.

Running the Moderated Chat App Sample

You’re ready to get started with the Moderated Chat App Sample. 

First, open your terminal and clone the React Chat Components repository. This repository contains the open-source, PubNub-powered React chat components and the Moderated Chat App Sample.
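
For example (the repository URL below assumes the components live in PubNub’s react-chat-components repo on GitHub; use the URL shown on the repo page if it differs):

git clone https://github.com/pubnub/react-chat-components.git
cd react-chat-components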

Navigate into the samples folder and install the dependencies. It’s normal for npm to take a few minutes to complete this step.
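
For example, assuming you are in the root of the cloned repo:

cd samples
npm install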

Now you need to configure the components with the API keys you obtained from your PubNub Dashboard. These keys allow you to send and receive messages on the PubNub network with the PubNub real-time messaging API. The Presence, Storage & Playback, and App Context features must be enabled on your keyset for the chat components to work with full functionality.

Copy the .env.example file to .env, then paste in the Publish and Subscribe keys from the keyset you created in your PubNub Dashboard.
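
A minimal sketch of this step:

cp .env.example .env

Then edit .env and add your keys (the variable names below are illustrative; keep whatever names you find in .env.example):

REACT_APP_PUB_KEY=pub-c-your-publish-key
REACT_APP_SUB_KEY=sub-c-your-subscribe-key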

The Moderated Chat sample needs to have details about the chat users, like their names and avatars, as well as details about the channels that these users are interacting in. This setup step creates a set of random users, channels, and membership information that the sample will use. Create object metadata for the User and Channel Object memberships:
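
npm run setup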

You’re ready to start the application. You may be asked if you want to run the application on another port if you already have the Moderation Dashboard running. Type ‘Y’.
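
Start the application with the standard npm start script (an assumption; check package.json if the sample uses a different script):

npm start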

The React Chat Components samples should now open in your browser at http://localhost:3001/react-chat-components/samples (or at http://localhost:3000/react-chat-components/samples if the Moderation Dashboard isn’t already running on port 3000).

Click on “Moderated Chat” in the menu to open the moderated chat sample that works with the Moderation Dashboard.

Now you’re ready to try all the features the Moderation Dashboard has to offer alongside the Moderated Chat App Sample.

Walkthrough of chat moderation features

In the Moderated Chat App Sample you will be assigned a random user and be given a few channels that you are a part of. Make a note of your user’s name and the channels that you have access to. If you refresh your browser you will be assigned a new user.

Key notes for using the Moderated Chat App Sample 

  • When you ran ‘npm run setup’ earlier, the Moderated Chat App Sample created object metadata for the users, channels, and memberships.

  • You can send messages with an image attachment, and these messages may be moderated if you have used the dashboard to enable automatic text or image moderation.

  • The Moderated Chat App Sample handles changes in a user’s object metadata set by an Admin using the Moderation Dashboard. 

    • For example, the Moderated Chat App Sample properly prevents a user who is banned, or who is blocked or muted on a channel, from performing the disallowed activities.

  • The Moderated Chat App Sample reflects changes to previously sent messages that have been edited or deleted by the Admin using the Moderation Dashboard.

Feature demo for Moderation Dashboard

You should have both the Moderation Dashboard and the Moderated Chat App Sample open in different windows. Log in to the Moderation Dashboard using the same credentials that you used for the account where you created the keys for the moderated chat sample. Select the app and API keys you created for the Moderated Chat App Sample. You may need to refresh the window if you had the Moderation Dashboard running before you followed this tutorial.

After selecting the app that you want to moderate you’ll be taken to the Overview page. The Overview page displays details about the app.

Configuring automatic text moderation

The Moderation Dashboard supports text moderation with word lists or a third-party API. To configure Text Moderation, expand the Settings menu in the left-hand navigation and select Text Moderation.

When the Text Moderation page is loaded, the Moderation Dashboard will make an API call to the PubNub Provisioning API to check if any Functions have been deployed. If it detects that there is an existing function for Text Moderation, it will populate the GUI with the current configuration. If this is the first time you have used the Moderation Dashboard with the Moderated Chat App Sample, it will detect no configuration.

Both Word List and Automatic Detection (3rd party API based) Text Moderation options have an on/off toggle at the top. Turn this toggle on before configuring either option.

Both text moderation options start with the Channel ID field. Specify a single channel to apply moderation to, apply moderation to all channels by setting the channel pattern to “*”, or select the “Apply to all Channel IDs” checkbox.

Selecting all channels is the easiest way to demo the text moderation functionality. For production apps we recommend using a channel pattern like “moderated.*” or “public.*” to reduce the number of unnecessary calls to the Text Moderation Function.

The remainder of the Text Moderation configuration differs depending on whether you are configuring Word List or Automatic Detection moderation.

Configure automatic word list based text moderation

Word List-based moderation is a simple profanity filter that checks a message to see if it contains any words that need to be moderated. This approach is simple, effective, and requires no third-party services.

The configuration allows the user to select a language, and provides an option to “Use Default words”. Administrators can create their own word lists or modify the default word lists for any language.

Administrators can decide to have any moderated words replaced by a mask character, or can choose to have the entire message blocked. When the Mask Word option is selected, the Administrator can specify the character that they want to use for the masking.

There is a checkbox option to “Route messages to the banned.*”. When this option is selected, the moderation logic will route the original unmoderated message to a channel whose ID matches the published channel, but is prefixed with “banned.”. For example, if the moderated message was sent to channel “foo”, the original message will be sent to “banned.foo”.

Start by choosing the “Mask” and “Route to banned channel” options to demo Word List-based Text Moderation. Try other options if you prefer.

Once it is configured to your liking, hit the Save button to save the moderation chat settings. This can take a moment or two to complete. After the Moderation Dashboard indicates that the moderation function is deployed successfully, go back to the Moderated Chat App Sample and try sending a message that should be moderated.

Configure automatic detection (third-party) text moderation

Tisane.ai API based Text Moderation can scan chat messages for bigotry, personal attacks (cyberbullying), hate speech, criminal activity, and sexual advances as well as provide profanity filtering. It provides support for four languages.

Using Tisane requires a (free) Tisane account. You can also hover over the tooltip next to the Tisane.ai API Key entry on the configuration page for a link to the signup page.

After setting the channel pattern and Tisane.ai API Key, you can adjust the thresholds for the various types of moderation that Tisane performs. Moving the slider to the right (towards “Low”) makes it more likely that the text will be moderated. Start with the sliders at “Low” in order to effectively demo sending a message that is moderated, without having to be too crude.

As with the Word List moderation, you can choose to mask or block the message, and to send it to a banned channel. Start by choosing the “Mask” and “Route to banned channel” options to demo Automatic Detection Text Moderation. Try other options if you prefer.

Once it is configured to your liking, hit the Save button to save the moderation chat settings. This can take a moment or two to complete. After the Moderation Dashboard indicates that the moderation function is deployed successfully, go back to the Moderated Chat App Sample and try sending a message that should be moderated.

An important note about masking behavior with Automatic Detection Text Moderation: with Word List-based moderation, only the offensive word is masked, but when third-party text moderation is enabled, the entire message is masked. The third-party API does not return details about individual words in the message.

Configuring image moderation

To configure Image Moderation, expand the Settings menu on the left-hand navigation and select Image Moderation.

SightEngine Image Moderation can scan images for inappropriate content. Using SightEngine requires a SightEngine account. Follow the Image Moderation Workflow instructions to configure a workflow where you can specify the type of content you want to moderate for.

When Text Moderation has already been configured in the Moderation Dashboard, Image Moderation inherits the channel pattern specified there. If no text moderation is enabled, you can specify the channel pattern on the Image Moderation page.

You will need to provide your SightEngine API User, API Secret, and Workflow ID.

After setting the channel pattern and SightEngine API credentials, you can adjust the Risk Factor for the moderation that SightEngine performs. As with Automatic Text Moderation, you can choose to send the message to a banned channel.

Once it is configured to your liking, hit the Save button to save the moderation settings. This can take a moment or two to complete. After the Moderation Dashboard indicates that the moderation function is deployed successfully, go back to the Moderated Chat App Sample and try sending a message that should be moderated.

Moderated channel viewer

The Moderation Dashboard has a “Channel View” that allows the Administrator or another moderator to see live streaming message activity and chat logs in the moderated app. Select the “Channels” option in the left-hand navigation within the Moderation Dashboard to see a list of all the channels that the Moderated Chat App Sample has created channel metadata for. You can edit or delete channel metadata here by hovering over a channel and clicking the pencil or trash icon.

Choose one of the channels that you see available in the Moderated Chat App Sample. You can also refresh the Moderated Chat App Sample to get a new user with different channel permissions. There is an option to search for a channel by name within the Moderation Dashboard.

To try out the Moderated Channel Viewer go to the Moderated Chat App Sample and send a message that you think should not be moderated, such as ‘Hello world!’. You should see this message show up in the Moderation Dashboard’s Channel Viewer.

Now go to the Moderated Chat App Sample and send a message that you think should be moderated, such as ‘I will kill you!’ You should see the masked message show up in both the Moderated Chat App Sample and the Moderation Dashboard’s Channel Viewer.

If you have configured Text Moderation to send blocked messages to the banned channel, the dashboard’s channel viewer allows an administrator to toggle to the banned view and see the original message. Hovering over this message shows some of the details returned by the Tisane.ai API.

After-the-fact text moderation

The Moderation Dashboard’s channel view also allows an administrator to delete or modify a user’s messages that have already been sent. You can use this feature to enforce community guidelines, remove hate speech, or moderate messages that don’t follow your own chat rules. Click on the message that you wish to moderate and then click on the pencil or trash icon to edit or delete the message. Choosing to edit the message shows a message composition window at the bottom of the chat panel.

Edit the message and press the send button; the Moderation Dashboard adds a Message Action to the message. The Moderated Chat App Sample, which is registered to handle these events, will be notified and will update the UI accordingly.

Blocking and muting users

An administrator may wish to block or mute a user on a channel. This can be done from the Members View of the Channel Viewer. A tip for finding your user (the one you see in the Moderated Chat App Sample): scroll through the list and look for users with the green online icon next to their names.

Once you have selected a user, you will see a GUI with icons to mute or block. Selecting either of these will update the object metadata associated with the user. The Moderated Chat App Sample, which is registered to handle these events, will be notified and will update the UI accordingly.

Changing user metadata in the Dashboard

The Moderation Dashboard includes a User View page for an administrator to create, update, and delete users. The User view also includes icons that allow the administrator to flag/unflag a user and ban a user.

Flagging a user has no impact on the Moderated Chat App Sample. However, the Moderated Chat App Sample does allow users to flag other users, which causes the flagged user to appear flagged in the Moderation Dashboard. The User view in the Moderation Dashboard allows filtering for flagged users. Hovering over the flag icon lets the administrator see any details added by the user who did the flagging.

Chat moderation next steps

In this tutorial, we covered how to start the Moderated Chat App Sample locally and how to use it to demonstrate the features of the Moderation Dashboard. The features we covered here included how to configure Automatic Text Moderation, how to configure Image Moderation, and how to use the Moderated Channel Viewer.

Are you ready to integrate the Moderation Dashboard into your project or do you want to learn more about the Moderation Dashboard in depth? Get in touch with our sales team and we can chat about what we can do for you.