
Eliminate bias and enhance fairness in AI models using Cortex Certifai

Summary

In this code pattern, learn how to use the Cortex Certifai Toolkit to create scans to evaluate the performance of multiple predictive models using IBM Watson Studio.

Description

Explainability of AI models is a difficult task that is made simpler by Cortex Certifai. The Cortex Certifai Toolkit evaluates AI models for robustness, fairness, and explainability, and allows users to compare different models or model versions on these qualities. Certifai can be applied to any black-box model, including machine learning and predictive models, and works with a variety of input data sets.

Data scientists can create model scan definitions, which comprise the trained models that you want to evaluate against the following parameters:

  • Performance metric (for example, Accuracy)
  • Robustness: How the model generalizes on new data
  • Fairness by group, which measures the bias in the data
  • Explainability, which measures the explanations provided for each model
  • Explanations, which display the change that must occur in a data set with given restrictions to obtain a different outcome
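Certifai computes these measures internally. As a rough illustration of what "fairness by group" captures, the sketch below computes a demographic-parity ratio (the ratio of favorable-outcome rates between groups) in plain Python. The data, group labels, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not part of the Certifai API.

```python
from collections import defaultdict

def demographic_parity_ratio(groups, outcomes):
    """Ratio of the lowest to highest favorable-outcome rate across groups.

    A value near 1.0 suggests similar treatment of the groups; a common
    (illustrative) rule of thumb flags ratios below 0.8 as potential bias.
    """
    favorable = defaultdict(int)
    total = defaultdict(int)
    for g, y in zip(groups, outcomes):
        total[g] += 1
        favorable[g] += y  # y is 1 for a favorable prediction, else 0
    rates = {g: favorable[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model predictions split by a protected attribute
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
outcomes = [ 1,   1,   1,   0,   1,   0,   0,   0 ]
ratio, rates = demographic_parity_ratio(groups, outcomes)
print(rates)            # {'A': 0.75, 'B': 0.25}
print(round(ratio, 2))  # 0.33 -- well below 0.8, so group B looks disadvantaged
```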

Business decision makers can view the evaluation comparison through visualizations and scores to select the best models for business goals and to identify whether models meet thresholds for robustness, fairness, and explainability. Data scientists can use the evaluation results for analysis to provide more trustworthy AI models.

This code pattern demonstrates how to use the Certifai Toolkit to create scans to evaluate the performance of multiple predictive models using the IBM Watson Studio platform.

Flow


  1. Log in to IBM Watson Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
  2. Upload the .csv data file to IBM Cloud Object Storage.
  3. Load the data file in the Watson Studio notebook.
  4. Install the Cortex Certifai Toolkit in the Watson Studio notebook.
  5. Generate visualizations of the explainability and interpretability of the AI models for the three different types of users.
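Step 3 of the flow amounts to reading the uploaded .csv into a pandas DataFrame. In Watson Studio, the generated "insert to code" snippet handles the IBM Cloud Object Storage credentials for you; the sketch below skips that and reads from an in-memory string so it stays self-contained. The column names are invented for illustration.

```python
import io
import pandas as pd

# Stand-in for the .csv uploaded to IBM Cloud Object Storage; in Watson
# Studio, the "insert to code" helper streams the real file body into
# the notebook instead of this hard-coded string.
csv_body = io.StringIO(
    "age,income,approved\n"
    "34,52000,1\n"
    "45,61000,0\n"
    "29,48000,1\n"
)

df = pd.read_csv(csv_body)
print(df.shape)             # (3, 3)
print(df.columns.tolist())  # ['age', 'income', 'approved']
```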

Instructions

Find the detailed steps in the README file. Those steps explain how to:

  1. Create an account with IBM Cloud.
  2. Create a new Watson Studio project.
  3. Add data.
  4. Create the notebook.
  5. Insert the data as DataFrame.
  6. Run the notebook.
  7. Analyze the results.
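The explanations you analyze in step 7 are counterfactual in nature: the smallest change to an input that flips the model's decision. Certifai searches for these automatically; the brute-force sketch below only illustrates the idea on a hypothetical one-rule credit model (the model, feature, and step size are all assumptions, not the Certifai implementation).

```python
def model(applicant):
    """Hypothetical black-box model: approve if income >= 50000."""
    return 1 if applicant["income"] >= 50000 else 0

def counterfactual_income(applicant, step=1000, max_steps=50):
    """Smallest income increase (in `step` increments) that flips a denial."""
    if model(applicant) == 1:
        return applicant  # already approved; nothing to change
    candidate = dict(applicant)
    for _ in range(max_steps):
        candidate["income"] += step
        if model(candidate) == 1:
            return candidate
    return None  # no counterfactual found within the search budget

denied = {"age": 29, "income": 47000}
cf = counterfactual_income(denied)
print(cf)  # {'age': 29, 'income': 50000} -> "raise income by 3000 to be approved"
```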