Identify and remove bias from AI models


How do you remove bias from machine learning models and ensure that their predictions are fair? What are the three stages in which a bias mitigation solution can be applied? This code pattern answers these questions to help you make informed decisions when consuming the results of predictive models.

If you have questions about this code pattern, ask them or look for answers in the associated forum.


Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems. While accuracy is one way to evaluate the quality of a machine learning model's predictions, fairness gives you a way to understand the practical implications of deploying the model in a real-world situation.

In this code pattern, you use a diabetes data set to predict whether a person is prone to diabetes. You'll use IBM Watson® Studio, IBM Cloud Object Storage, and the AI Fairness 360 Toolkit to prepare the data, apply a bias mitigation algorithm, and then analyze the results.
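Before mitigating bias, you first need to measure it. The sketch below computes two common group-fairness metrics (disparate impact and statistical parity difference) in plain Python. The binary privileged/unprivileged attribute and the toy labels are assumptions for illustration only; the actual data set in the code pattern defines its own columns and protected attributes.

```python
# Hypothetical sketch: two group-fairness metrics reported by the
# AI Fairness 360 Toolkit, computed by hand on toy data.

def favorable_rate(labels, groups, group, favorable=1):
    """Fraction of members of `group` that received the favorable outcome."""
    outcomes = [y for y, g in zip(labels, groups) if g == group]
    return sum(1 for y in outcomes if y == favorable) / len(outcomes)

def disparate_impact(labels, groups, privileged=1, unprivileged=0):
    """Ratio unprivileged_rate / privileged_rate. A value of 1.0 means
    parity; values below roughly 0.8 are commonly flagged as biased."""
    return (favorable_rate(labels, groups, unprivileged)
            / favorable_rate(labels, groups, privileged))

def statistical_parity_difference(labels, groups, privileged=1, unprivileged=0):
    """Difference unprivileged_rate - privileged_rate. 0.0 means parity."""
    return (favorable_rate(labels, groups, unprivileged)
            - favorable_rate(labels, groups, privileged))

# Toy sample: the privileged group (1) receives the favorable label (1)
# far more often than the unprivileged group (0).
labels = [1, 0, 1, 1, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact(labels, groups))               # well below 0.8
print(statistical_parity_difference(labels, groups))  # well below 0.0
```

In the code pattern itself, these numbers come from the toolkit's metric classes rather than hand-rolled functions, but the underlying arithmetic is the same.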

After completing this code pattern, you will understand how to:

  • Create a project using Watson Studio
  • Use the AI Fairness 360 Toolkit


Identify bias in AI models

  1. Log in to IBM Watson Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
  2. Upload the .csv data file to IBM Cloud Object Storage.
  3. Load the data file in the Watson Studio notebook.
  4. Install the AI Fairness 360 Toolkit in the Watson Studio notebook.
  5. Analyze the results after applying the bias mitigation algorithm during pre-processing, in-processing, and post-processing stages.
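To make the pre-processing stage concrete: one of the best-known pre-processing algorithms, Reweighing, assigns each training example the weight P(A=g)·P(Y=y) / P(A=g, Y=y), so that every group ends up with the same weighted favorable-outcome rate. The sketch below implements that formula directly; it is a plain-Python illustration of the idea, not a substitute for the toolkit's own implementation.

```python
# Hypothetical sketch of the Reweighing pre-processing idea:
# weight each instance by P(group) * P(label) / P(group, label).
from collections import Counter

def reweighing_weights(labels, groups):
    """Per-instance weights w(g, y) = P(A=g) * P(Y=y) / P(A=g, Y=y).
    After weighting, each group's weighted favorable-outcome rate is
    equal, and the weights sum to the number of instances."""
    n = len(labels)
    n_group = Counter(groups)
    n_label = Counter(labels)
    n_joint = Counter(zip(groups, labels))
    return [
        (n_group[g] * n_label[y]) / (n * n_joint[(g, y)])
        for g, y in zip(groups, labels)
    ]

def weighted_favorable_rate(labels, groups, weights, group, favorable=1):
    """Weighted fraction of `group` that received the favorable outcome."""
    total = sum(w for w, g in zip(weights, groups) if g == group)
    fav = sum(w for w, y, g in zip(weights, labels, groups)
              if g == group and y == favorable)
    return fav / total

# On biased toy data, the weighted rates of both groups come out equal.
labels = [1, 0, 1, 1, 1, 0, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
w = reweighing_weights(labels, groups)
print(weighted_favorable_rate(labels, groups, w, 1))  # equals the next line
print(weighted_favorable_rate(labels, groups, w, 0))
```

In-processing and post-processing algorithms work differently (they modify the training objective or the model's predictions, respectively), which is why the code pattern compares results across all three stages.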


Find the detailed steps for this pattern in the README file. The steps show you how to:

  1. Create an account with IBM Cloud.
  2. Create a new Watson Studio project.
  3. Add data.
  4. Create the notebook.
  5. Insert the data as DataFrame.
  6. Run the notebook.
  7. Analyze the results.
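For step 5, the data file must be split into features and labels before it can be wrapped in a fairness-aware data set. The sketch below does this with Python's standard `csv` module; the column name `Outcome` is an assumption (it matches the widely used Pima Indians diabetes data set), so adjust it to whatever label column your file actually uses. In Watson Studio, the "Insert to code" option generates equivalent pandas code for you.

```python
# Hypothetical sketch: load a labeled CSV into feature dicts and labels.
import csv
import io

def load_labeled_csv(fileobj, label_col="Outcome"):
    """Read a CSV and split it into a list of feature dicts and a list
    of integer labels. `label_col` is assumed, not taken from the code
    pattern; change it to match your own file's label column."""
    reader = csv.DictReader(fileobj)
    features, labels = [], []
    for row in reader:
        labels.append(int(row.pop(label_col)))
        features.append({k: float(v) for k, v in row.items()})
    return features, labels

# Small in-memory example standing in for the uploaded .csv file.
sample = io.StringIO("Glucose,BMI,Outcome\n148,33.6,1\n85,26.6,0\n")
X, y = load_labeled_csv(sample)
print(y)  # [1, 0]
```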

This code pattern is part of The AI 360 Toolkit: AI models explained use case series, which helps stakeholders and developers understand the complete AI model lifecycle and make informed decisions.