We live in a digital world where we rely on devices such as smartphones and computers to keep ourselves informed, learn new concepts, plan our activities, and make crucial decisions. With the evolution of technology, the way we interact with these devices has also undergone a rapid transformation: from the era when we were expected to key in a series of arcane commands to get work done, to the age of Graphical User Interfaces (GUIs) that required several mouse clicks to trigger an action, to the current age of conversational interfaces, where interaction happens through a natural language conversation using voice or text.
Conversational Interfaces (CI) are powered by Artificial Intelligence (AI) and offer a much more natural, intuitive, and human-like way to delegate tasks and get work done. Not just that, they are even capable of providing fun, entertainment, and a personal emotional connection. For example, I can ask a virtual assistant like Cortana, Google Now, or Siri on my smartphone to book a cab ride, search online for "punjabi food recipes", or entertain me with jokes and songs, all through a voice or text-based conversation.
Conversational agents, or chatbots, are also being deployed by various online business portals to provide a more personalised experience for their customers. In fact, some chatbots (such as Microsoft Xiaoice, a text-based chatbot) are evolving beyond being assistants and note-takers to project a persona of their own. They have unique language characteristics, a sense of humor, and the ability to connect with users' emotions.
While it's certainly great news that we can converse with our machines much like we do with other humans, there are major concerns that still need to be addressed. As a social species, humans are endowed with two fundamental qualities: self-awareness and self-regulation. These qualities ensure that our behavior and actions in society are "appropriate", i.e. they satisfy commonly accepted standards or expectations, such as not being rude, discourteous, or disrespectful towards any individual or group; avoiding behavior which may cause, or is capable of causing, harm to others; avoiding activities which are illegal under the laws of the country; and avoiding lewd or extremely violent behavior.
However, conversational agents are still in their infancy when it comes to fully comprehending their responses or suggestions before communicating them to the end user. In this sense, their faculties of self-awareness and self-regulation are not yet mature. Most conversational interfaces today are either programmed to give pre-defined responses or learn to respond from training data in the form of message-response pairs culled from historical conversations on sources like Twitter and online forums such as Quora and Yahoo! Answers. As a result, they may sometimes utter, suggest, or respond with messages which are "inappropriate" in a given context.
AI researchers are looking for automatic techniques to detect such "inappropriate" or "toxic" content so that machines can employ them for effective self-regulation. This technology could also be used for moderating discussions and comments on the many online forums and news sites where certain issues can rapidly devolve into abuse and hate commentary.
Researchers have come up with an automatic technique to identify and prune inappropriate query suggestions offered by Search Engines (SE). Query Auto Completion (QAC), or Auto Suggest, is a popular SE feature: based on the first few characters entered, the SE guesses the most probable query completions matching the user's intent and automatically offers them as suggestions in real time while the user is still typing. However, while retrieving potential completions from search logs, SEs sometimes inadvertently suggest query completions which are inappropriate.
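At its core, QAC matches the typed prefix against previously logged queries and ranks the candidates, typically by popularity. The sketch below illustrates the idea with a tiny, invented search log and plain prefix matching; the names and data are hypothetical, not any real engine's implementation.

```python
# Minimal sketch of Query Auto Completion: rank logged queries that
# start with the typed prefix by how often they were issued.
# SEARCH_LOG is an invented toy log, purely for illustration.
from collections import Counter

SEARCH_LOG = Counter({
    "bollywood movies are better than hollywood": 50,
    "bollywood movies are so bad": 30,
    "bollywood movies 2017": 20,
    "weather today": 90,
})

def suggest(prefix: str, k: int = 3):
    """Return up to k logged queries starting with prefix, most frequent first."""
    matches = [(q, n) for q, n in SEARCH_LOG.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])
    return [q for q, _ in matches[:k]]

print(suggest("bollywood movies are"))
```

Note that nothing in this ranking step inspects the *content* of a completion; whatever is frequent in the log gets surfaced, which is exactly how inappropriate completions slip through.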
For example, if I enter the prefix "bollywood movies are " on a popular search engine, some of the suggestions I get are "bollywood movies are better than hollywood", "bollywood movies are rubbish", "bollywood movies are stupid", and "bollywood movies are so bad", of which the last three may offend Bollywood movie fans. In other circumstances, SEs may offer suggestions related to illegal activities, to suicide or self-harm, or to buying or selling banned drugs or substances. Any service offering such inappropriate suggestions risks being seen as endorsing those views, tarnishing its brand image or, worse, inviting legal complications. Thus, it is imperative for SEs to understand and regulate the search suggestions they offer.
These problems fall broadly into the bucket we will label offensive or inappropriate content. Detecting offensive queries is hard because search queries often contain spelling mistakes and asterisk characters, consist of loosely connected keywords without enough context, carry the ambiguities of natural language, and may refer to real-world entities. The query "lethal weapon suicide attempt" may seem violent, offensive, and hence "inappropriate", but it is the name of a famous sound track on the pop album "Lethal Weapon". On the other hand, a query such as "what to do when tweaking alone" appears clean, yet the word "tweaking" has multiple meanings, one of which refers to the act of consuming an illegal drug. Pattern-based filtering techniques have severe limitations, since they only work for the limited set of words defined in their rules and require constant manual intervention. So, a new strategy is required to automatically spot and filter such offensive query suggestions.
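To see concretely why pattern-based filtering is brittle, consider a minimal blocklist filter like the sketch below (the word list and example queries are illustrative only). An obfuscated spelling slips past the rules, while a harmless song title is wrongly blocked.

```python
import re

# Naive pattern-based filter: block any suggestion containing a
# blocklisted term as a whole word. The word list is illustrative.
BLOCKED = ["suicide", "meth"]
PATTERN = re.compile(r"\b(" + "|".join(BLOCKED) + r")\b", re.IGNORECASE)

def is_blocked(query: str) -> bool:
    return bool(PATTERN.search(query))

print(is_blocked("shake and bake meth instructions"))  # True: exact word matched
print(is_blocked("sh4ke and bake m3th instructions"))  # False: obfuscated spelling evades the rule
print(is_blocked("lethal weapon suicide attempt"))     # True: false positive on a song title
```

Both failure modes require manual rule updates to fix, and every fix invites new evasions; this is the maintenance burden that motivates a learned approach.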
The technique proposed by some researchers is based on a field of computer science research known as Deep Learning (DL), which aims to build machines that can process data and learn in a way inspired by the human brain. DL essentially involves building artificial neural networks which are trained to mimic aspects of the brain's behavior. These networks can learn to represent and reason over the various inputs given to them, such as words, images, sounds, and so on. The figure below shows an illustration of an artificial neural network.
As shown in the illustration, these neural networks are composed of multiple layers: an input layer, an output layer, and one or more hidden layers. They can be trained to perform various tasks. For example, if a neural network is trained to understand a given image along with its various objects, the different hidden layers tend to learn different aspects of the image. The first hidden layer of neurons may just identify edges at different angles of orientation; the next layer may use those edges to identify more complex shapes, such as triangles and rectangles; and successive layers could build on previous ones to learn more sophisticated objects and features, such as faces. The most interesting thing is that, given training data, the model learns all of this on its own. In this case study, the researchers have proposed a novel architecture for training a network which effectively learns and models the semantic meaning of a given search query.
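The layered computation described above can be sketched in a few lines: each neuron forms a weighted sum of its inputs and passes it through a non-linear activation, and stacking layers lets later neurons build on earlier ones. The weights below are fixed toy values chosen for illustration; in a real network they would be learned from data.

```python
import math

# Minimal feed-forward network: two inputs, two hidden neurons,
# one output neuron. Weights are toy values, not learned.

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron: weighted sum of the inputs + non-linearity.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output neuron combines the hidden activations the same way.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

hidden_weights = [[1.0, -1.0], [-1.0, 1.0]]  # two hidden neurons, two inputs each
output_weights = [1.5, 1.5]
score = forward([0.5, 0.25], hidden_weights, output_weights)
print(round(score, 3))  # a single score in (0, 1)
```

Training consists of nudging these weights so that the output score moves toward the correct label for each example; with enough layers and data, the hierarchy of learned features emerges as described above.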
Similar to the way we teach the human brain a concept by showing labeled examples, this new artificial neural network was trained using several thousand real-world web search queries labeled as acceptable or inappropriate. After training, the researchers presented the network with a new set of 4,000 real-world search queries from Bing. The model achieved an accuracy of 92%, significantly better than pattern-based and other state-of-the-art machine learning techniques. For example, the model identifies "a**monkey" (a curse word in urban slang) and "shake and bake meth instructions" (instructions for making meth, a short form of the banned drug methamphetamine) as inappropriate suggestions despite the spelling mistakes, asterisk symbols, and other short forms. It also identifies the query "marvin gaye if I die tonight download" as a clean suggestion, although it contains the words "die tonight". The new approach isn't perfect, of course, but overall this is interesting work which provides some important directions for future research.
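As a rough illustration of this train-then-classify workflow (and emphatically not the researchers' actual model), the sketch below trains a simple perceptron over character trigrams on a tiny, invented set of labeled queries. Character n-grams give it some robustness to surface variation, though nothing close to the semantic modelling of the deep network.

```python
# Toy train-then-classify workflow: a perceptron over character
# trigrams, standing in for the deep model. Labeled data is invented.

def trigrams(text):
    """Character trigrams of the padded, lowercased query."""
    text = " " + text.lower() + " "
    return {text[i:i + 3] for i in range(len(text) - 2)}

def train(examples, epochs=20):
    weights = {}
    for _ in range(epochs):
        for query, label in examples:  # label: 1 = inappropriate, 0 = clean
            score = sum(weights.get(g, 0.0) for g in trigrams(query))
            predicted = 1 if score > 0 else 0
            if predicted != label:  # perceptron rule: update only on mistakes
                for g in trigrams(query):
                    weights[g] = weights.get(g, 0.0) + (label - predicted)
    return weights

def classify(weights, query):
    return 1 if sum(weights.get(g, 0.0) for g in trigrams(query)) > 0 else 0

training = [
    ("how to make meth at home", 1),
    ("buy banned drugs online", 1),
    ("weather in mumbai today", 0),
    ("bollywood movies 2017", 0),
]
w = train(training)
print(classify(w, "weather in delhi today"))  # 0: generalises to an unseen clean query
```

The real system differs in scale and architecture (thousands of labeled queries, a deep network learning semantic representations rather than surface n-gram weights), but the pipeline shape, labeled examples in, a query classifier out, is the same.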
We are in the middle of an AI revolution where computers are promising to become the trusted lieutenants and adorable friends of humans. This potential can be realized only if these bots become more aware of their actions and learn to restrain and regulate their automatic responses. An important motto to uphold: Thou Shalt Not Offend!
About the author: Manoj Kumar Chinnakotla is a Senior Applied Scientist, Artificial Intelligence and Research, at Microsoft India.