From 2c841d6fdc6d7d5efab7eaf99eb4d153fadd9e24 Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Wed, 25 Jan 2023 00:18:12 -0500
Subject: [PATCH 01/10] Create jps-demystifying-algorithmic-fairness.md

---
 jps-demystifying-algorithmic-fairness.md | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 jps-demystifying-algorithmic-fairness.md

diff --git a/jps-demystifying-algorithmic-fairness.md b/jps-demystifying-algorithmic-fairness.md
new file mode 100644
index 000000000..754dc3b17
--- /dev/null
+++ b/jps-demystifying-algorithmic-fairness.md
@@ -0,0 +1 @@
+This is me figuring this out

From f3cf4f3688c49923e28a8ecd594b460d7aa8dc11 Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 16:09:51 -0500
Subject: [PATCH 02/10] Delete jps-demystifying-algorithmic-fairness.md

---
 jps-demystifying-algorithmic-fairness.md | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 jps-demystifying-algorithmic-fairness.md

diff --git a/jps-demystifying-algorithmic-fairness.md b/jps-demystifying-algorithmic-fairness.md
deleted file mode 100644
index 754dc3b17..000000000
--- a/jps-demystifying-algorithmic-fairness.md
+++ /dev/null
@@ -1 +0,0 @@
-This is me figuring this out

From ce1984b94376d33c866dcf2a0ec2879635eb09a4 Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 20:05:35 -0500
Subject: [PATCH 03/10] Create demystifying_algorithmic_fairness.md

---
 .../demystifying_algorithmic_fairness.md | 161 ++++++++++++++++++
 1 file changed, 161 insertions(+)
 create mode 100644 demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md

diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
new file mode 100644
index 000000000..b3dd7550b
--- /dev/null
+++ b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
@@ -0,0 +1,161 @@
+
+
+# Demystifying Algorithmic Fairness
+ +## Overview +@comment + +**Is this module right for me?** +@long_description + +**Estimated time to completion:** +@estimated_time + +**Pre-requisites** + +None. This lesson is appropriate for beginners looking to learn more about the ethical problems arising in Data Science. Experience with basic Data Science terminology is helpful but it is not required. + +**Learning Objectives** + +@learning_objectives + +
+ +## Bias in Machine Learning + +Although scientists before believed machine learning was an ethical, nonbiased mechanism to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are enforced intentionally or without knowing, biases continue to exist. + +Below is a short video created by RSA with Cathy O'Neil voicing how discirmination in algorithm is very much present. +!?[This video is hosted on youtube.]https://www.youtube.com/watch?v=heQzqX35c9A + +True or False: There are over more than 100 human biases recorded that can potentially impact algorithms. + +[(X)] TRUE +[( )] FALSE + + +## Types of Bias in Machine Learning + +
+
+**Warning!**
+
+There are more than 100 human biases. The biases listed here are only the tip of the iceberg.
+ +* Reporting Bias: Algorithms that relied on data sets can have an issue in the amount of times a particular instance is reported. This is an issue within frequency. As people often document events that are unusual or rare, the data set may lack how frequent "ordinary" events go. +* Implicit Bias: These are assumptions based on a programmer's own perspective and personal experiences that may not necessarily be true for everyone. A programmer can falsely attribute assumptions to their algorithm, therefore causing a chain reaction. +* Confirmation Bias: Developers can classify data in ways that will provoke an algorithm to prove their existing belief. +* Hidden Bias: These are underlying stereotypes that are attributed to a group of people unconciously. + +## Examples of Bias + +There are several examples of machine learning impacting real people. Below are examples briefly outlined; + +* In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in commercial algorithms used by the U.S. health care system. This algorithm falsely determined Black patients were healthier than equally sick White patients. The effects of this was in both the care they recieved and their financia aid. +* COMPAS, known as the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to determine the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued Black defendants were "twice as likely" as white defendants to be classified as being of higher risk of reoffending. This led to a dispute between the publication and Equivant- the company responsible for the software. +* A much different example shows an action that can cause previous held biases to disrupt the status quote. According to a [SFGATE] (https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accent. This idea as a based on Sanas assumption that callers will be nicer to hearing a "White" voice. While Sanas brags about how their startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas took to covering actual issues in call centers- such as low pay, little to no support, and long hours. Others argued the approach dehumanized the workers, though Sanas is still continoung with their business plan. +* And these are just some of the overwhelming amount of biases found in algorithms. + +Below is a video that further provides examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve. + +!?[This video is hosted on youtube.](https://www.youtube.com/watch?v=gV0_raKR2UQ) + + +## What is Algorithmic Fairness? + +Algorithmic Fairness is described as a field of research dedicated to understanding biases such as those outlined in the previous section. Described as being an ethical way of approaching biases within machine learning, researchers aim to find ways to correct these biases. Of course, there is a high amount of complexity within this issue as a whole, and one universal clear policy seems unlikely to be attained any time soon. 
+Although the field of Algorithmic Fairness is fairly new and ever-changing, learning about its core goals and its attempts is vital to better analyze how intertwined ethics can be in Data Science.
+
+**Important note**
+
+There are differing views on how Algorithmic Fairness can impact research and more, whether for perceived good or bad. Reading these materials can help jumpstart uncomfortable conversations and acknowledge truths. While this module aims to explain the field, its relevance, and its potential future, the actions you take are ultimately up to you. However, it can help you understand the impact your actions can have and see the impacts the field will continue to have.
+ + +## The Goals of Algorithmic Fairness + +Accoridng to an article published on [towardsdatascience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f), some of the goals of algorithmic fairness are as below; + +* Finding a definition of fairness +* Finding a way to appropriately measure fairness +* Finding ways to properly inform programmers/developers, companies, researchers, and more. +* Developing ethical ways to collect data that will be interpreted as fair. + +## The Future of Algorithmic Fairness + +
+
+**A little encouragement...**
+
+Becoming overwhelmed or feeling powerless can happen when looking into the issues within algorithms. This is a topic that can be uncomfortable for many and even new to some. However, there is a lot of work that can be done to correct biases in algorithms, and education is one of the first steps to understanding the complexity of this issue. Ethics as a whole can be scary, but the future of Data Science is still bright. There is so much that can be done and so much that is being done as you finish this module.
+
+The future of Algorithmic Fairness relies on the willingness of those in and out of the field to adapt and learn. This is easier said than done, as evidenced by the articles that have risen in popularity listing reasons to believe algorithms can never be fair, and the articles condemning them in response. However, there is a lot of work being done that can help the future of algorithms and machine learning. Below are just a few examples of people and projects advancing algorithmic fairness:
+
+* Canada CIFAR AI Chair [Dhanya Sridhar](https://cifar.ca/cifarnews/2022/09/12/believe-the-impossible-the-future-of-fairness-in-ai/) hopes to develop methods where machine learning can draw from "stable" and "causal" information. She plans on finding ethical ways to incorporate AI into decision making by forcing AI to focus on fairer and newer conclusions, rather than producing outcomes based on past assumptions.
+* Individuals like Matthew Finney, a data scientist researching algorithmic bias at Harvard, look to define and measure algorithmic bias while advocating for more data scientists of color.
+* Groups like the Algorithmic Fairness and Opacity Group (AFOG) were established to bring together different perspectives on fixing the issues of bias in algorithms.
+* There are attempts at raising awareness of the harm biases can cause. This is evident in professional seminars, online lessons, and various scientific articles.
+* Different ways to tackle this issue are being brainstormed. One solution is to retrain algorithms every so often with fresh data. Of course, these possible solutions need to be tested.
+ +## Additional Resources + +The last section of the module content should be a list of additional resources, both ours and outside sources, including links to other modules that build on this content or are otherwise related. + +For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples. + +For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers some of it. + +This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube comes hand in hand with the previous article's content. + + +## Feedback + +In the beginning, we stated some goals. + +**Learning Objectives:** + +@learning_objectives + +We ask you to fill out a brief (5 minutes or less) survey to let us know: + +* If we achieved the learning objectives +* If the module difficulty was appropriate +* If we gave you the experience you expected + +We gather this information in order to iteratively improve our work. Thank you in advance for filling out [our brief survey](https://redcap.chop.edu/surveys/?s=KHTXCXJJ93&module_name=%22Module+Template%22)! From 4d69fcc1b879006d4a69231e780be9a92e42678b Mon Sep 17 00:00:00 2001 From: jlinn3 <121886360+jlinn3@users.noreply.github.com> Date: Thu, 26 Jan 2023 20:08:36 -0500 Subject: [PATCH 04/10] Updated article link in Examples section --- .../demystifying_algorithmic_fairness.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md index b3dd7550b..961b874c3 100644 --- a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md +++ b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md @@ -86,7 +86,7 @@ There are several examples of machine learning impacting real people. Below are * In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in commercial algorithms used by the U.S. health care system. This algorithm falsely determined Black patients were healthier than equally sick White patients. The effects of this was in both the care they recieved and their financia aid. * COMPAS, known as the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to determine the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued Black defendants were "twice as likely" as white defendants to be classified as being of higher risk of reoffending. This led to a dispute between the publication and Equivant- the company responsible for the software. -* A much different example shows an action that can cause previous held biases to disrupt the status quote. According to a [SFGATE] (https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accent. This idea as a based on Sanas assumption that callers will be nicer to hearing a "White" voice. 
-* A much different example shows how an action can let previously held biases disrupt the status quo. According to an [SFGATE] (https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accents. The idea is based on Sanas's assumption that callers will be nicer when they hear a "White" voice. While Sanas brags that its startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas takes to covering up actual issues in call centers, such as low pay, little to no support, and long hours. Others argue the approach dehumanizes the workers, though Sanas is continuing with its business plan.
+* A much different example shows how an action can let previously held biases disrupt the status quo. According to an [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accents. The idea is based on Sanas's assumption that callers will be nicer when they hear a "White" voice. While Sanas brags that its startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas takes to covering up actual issues in call centers, such as low pay, little to no support, and long hours. Others argue the approach dehumanizes the workers, though Sanas is continuing with its business plan.
 * And these are just a few of the overwhelming number of biases found in algorithms.
 
 Below is a video that provides further examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve.
 
 !?[This video is hosted on YouTube.](https://www.youtube.com/watch?v=gV0_raKR2UQ)

From e46a4c3cc8fd8ac7c506b8eeaf7931f42491ab32 Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 20:14:39 -0500
Subject: [PATCH 05/10] updated youtube links

---
 .../demystifying_algorithmic_fairness.md | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
index 961b874c3..660f0f71c 100644
--- a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
+++ b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
@@ -58,8 +58,8 @@
 Although scientists once believed machine learning was an ethical, unbiased way to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are introduced intentionally or unknowingly, biases continue to exist.
 
-Below is a short video created by RSA, with Cathy O'Neil voicing how discrimination in algorithms is very much present.
-!?[This video is hosted on YouTube.]https://www.youtube.com/watch?v=heQzqX35c9A
+A [video](https://www.youtube.com/watch?v=heQzqX35c9A) created by RSA, with Cathy O'Neil, warns of the dangers hidden in algorithms and asks viewers to begin questioning data.
+
 True or False: There are more than 100 recorded human biases that can potentially impact algorithms.
@@ -89,9 +89,7 @@ There are several examples of machine learning impacting real people. Below are
 * A much different example shows how an action can let previously held biases disrupt the status quo. According to an [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accents. The idea is based on Sanas's assumption that callers will be nicer when they hear a "White" voice. While Sanas brags that its startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas takes to covering up actual issues in call centers, such as low pay, little to no support, and long hours. Others argue the approach dehumanizes the workers, though Sanas is continuing with its business plan.
 * And these are just a few of the overwhelming number of biases found in algorithms.
 
-Below is a video that provides further examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve.
-
-!?[This video is hosted on YouTube.](https://www.youtube.com/watch?v=gV0_raKR2UQ)
+[This video](https://www.youtube.com/watch?v=gV0_raKR2UQ) provides further examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve.
 
 
 ## What is Algorithmic Fairness?
@@ -141,7 +139,7 @@
 For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples.
 
 For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers several of them.
 
-This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube goes hand in hand with the previous article's content. 
+This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube goes hand in hand with the previous article's content.
 
 
 ## Feedback

From 96317c47293b30a5861c914b8c31c5637370a5ad Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 20:23:15 -0500
Subject: [PATCH 06/10] Create demystifying_algorithmic_fairnesss.md

---
 .../demystifying_algorithmic_fairnesss.md | 162 ++++++++++++++++++
 1 file changed, 162 insertions(+)
 create mode 100644 demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md

diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md
new file mode 100644
index 000000000..1f5c90c34
--- /dev/null
+++ b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md
@@ -0,0 +1,162 @@
+
+
+# Demystifying Algorithmic Fairness
+ +## Overview +@comment + +**Is this module right for me?** +@long_description + +**Estimated time to completion:** +@estimated_time + +**Pre-requisites** + +None. This lesson is appropriate for beginners looking to learn more about the ethical problems arising in Data Science. Experience with basic Data Science terminology is helpful but it is not required. + +**Learning Objectives** + +@learning_objectives + +
+ +## Bias in Machine Learning + +Although scientists before believed machine learning was an ethical, nonbiased mechanism to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are enforced intentionally or without knowing, biases continue to exist. + +Below is a short video created by RSA with Cathy O'Neil narrating the dangers hidden with algorithms and machine learning. + +https://www.youtube.com/watch?v=heQzqX35c9A + +True or False: There are over more than 100 human biases recorded that can potentially impact algorithms. + +[(X)] TRUE +[( )] FALSE + + +## Types of Bias in Machine Learning + +
+
+**Warning!**
+
+There are more than 100 human biases. The biases listed here are only the tip of the iceberg.
+ +* Reporting Bias: Algorithms that relied on data sets can have an issue in the amount of times a particular instance is reported. This is an issue within frequency. As people often document events that are unusual or rare, the data set may lack how frequent "ordinary" events go. +* Implicit Bias: These are assumptions based on a programmer's own perspective and personal experiences that may not necessarily be true for everyone. A programmer can falsely attribute assumptions to their algorithm, therefore causing a chain reaction. +* Confirmation Bias: Developers can classify data in ways that will provoke an algorithm to prove their existing belief. +* Hidden Bias: These are underlying stereotypes that are attributed to a group of people unconciously. + +## Examples of Bias + +There are several examples of machine learning impacting real people. Below are examples briefly outlined; + +* In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in commercial algorithms used by the U.S. health care system. This algorithm falsely determined Black patients were healthier than equally sick White patients. The effects of this was in both the care they recieved and their financia aid. +* COMPAS, known as the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to determine the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued Black defendants were "twice as likely" as white defendants to be classified as being of higher risk of reoffending. This led to a dispute between the publication and Equivant- the company responsible for the software. +* A much different example shows an action that can cause previous held biases to disrupt the status quote. According to a [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accent. This idea as a based on Sanas assumption that callers will be nicer to hearing a "White" voice. While Sanas brags about how their startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas took to covering actual issues in call centers- such as low pay, little to no support, and long hours. Others argued the approach dehumanized the workers, though Sanas is still continoung with their business plan. +* And these are just some of the overwhelming amount of biases found in algorithms. + +Below is a video that further provides examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve. + +(https://www.youtube.com/watch?v=gV0_raKR2UQ) + + +## What is Algorithmic Fairness? + +Algorithmic Fairness is described as a field of research dedicated to understanding biases such as those outlined in the previous section. Described as being an ethical way of approaching biases within machine learning, researchers aim to find ways to correct these biases. Of course, there is a high amount of complexity within this issue as a whole, and one universal clear policy seems unlikely to be attained any time soon. 
+Although the field of Algorithmic Fairness is fairly new and ever-changing, learning about its core goals and its attempts is vital to better analyze how intertwined ethics can be in Data Science.
+
+**Important note**
+
+There are differing views on how Algorithmic Fairness can impact research and more, whether for perceived good or bad. Reading these materials can help jumpstart uncomfortable conversations and acknowledge truths. While this module aims to explain the field, its relevance, and its potential future, the actions you take are ultimately up to you. However, it can help you understand the impact your actions can have and see the impacts the field will continue to have.
+ + +## The Goals of Algorithmic Fairness + +Accoridng to an article published on [towardsdatascience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f), some of the goals of algorithmic fairness are as below; + +* Finding a definition of fairness +* Finding a way to appropriately measure fairness +* Finding ways to properly inform programmers/developers, companies, researchers, and more. +* Developing ethical ways to collect data that will be interpreted as fair. + +## The Future of Algorithmic Fairness + +
+
+**A little encouragement...**
+
+Becoming overwhelmed or feeling powerless can happen when looking into the issues within algorithms. This is a topic that can be uncomfortable for many and even new to some. However, there is a lot of work that can be done to correct biases in algorithms, and education is one of the first steps to understanding the complexity of this issue. Ethics as a whole can be scary, but the future of Data Science is still bright. There is so much that can be done and so much that is being done as you finish this module.
+ +The future of Algorithmic Fairness relies on the willingness for those in and out of the field to adapt and learn. This is easier said than done, as evident by articles that have risen in popularity to list the cons of believing algorithms can ever be fair and the articles condemning them in response. However, there is a lot of work being done that can help the future of algorithms and machine learning. Below are just a few examples of people and projects advancing alogorithmic fairness: + +* Canada CIFAR AI Chair [Dhanya Sridhar](https://cifar.ca/cifarnews/2022/09/12/believe-the-impossible-the-future-of-fairness-in-ai/) hopes to develop methods where machine learning can draw from "stable" and "casual" information. She plans on finding ethical ways to incorporate AI into decision making by forcing AI to focus on the fairer and newer conclusions, rather than producing outcomes based on past assumptions. +* Individuals like Matthew Finney, a data scientist researching the advancement of algorithmic bias at Harvard, look to define and measure algorithmic bias while advocating for more data scientists of color. +* Groups like the Algorithmic Fairness Opacity Group, or the AFOG, became established to bring together different perspectives into fixing the issues of bias in algorithms. +* There are attempts at raising awareness of the harm biases can cause. This is evident in professional seminars, online lessons, and various scientific articles. +* There are different ways being brainstormed to tackle this issue. One solution is to retrain algorithms every so often with fresh data. Of course, these possible solutions need to be tested. + + +## Additional Resources + +The last section of the module content should be a list of additional resources, both ours and outside sources, including links to other modules that build on this content or are otherwise related. + +For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples. + +For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers some of it. + +This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube comes hand in hand with the previous article's content. + + +## Feedback + +In the beginning, we stated some goals. + +**Learning Objectives:** + +@learning_objectives + +We ask you to fill out a brief (5 minutes or less) survey to let us know: + +* If we achieved the learning objectives +* If the module difficulty was appropriate +* If we gave you the experience you expected + +We gather this information in order to iteratively improve our work. Thank you in advance for filling out [our brief survey](https://redcap.chop.edu/surveys/?s=KHTXCXJJ93&module_name=%22Module+Template%22)! 
From 31f20740adad5fc8a51fb5b7f698b46abf9ac2bb Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 20:26:29 -0500
Subject: [PATCH 07/10] Delete demystifying_algorithmic_fairness.md

---
 .../demystifying_algorithmic_fairness.md | 159 ------------------
 1 file changed, 159 deletions(-)
 delete mode 100644 demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md

diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
deleted file mode 100644
index 660f0f71c..000000000
--- a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
-# Demystifying Algorithmic Fairness
- -## Overview -@comment - -**Is this module right for me?** -@long_description - -**Estimated time to completion:** -@estimated_time - -**Pre-requisites** - -None. This lesson is appropriate for beginners looking to learn more about the ethical problems arising in Data Science. Experience with basic Data Science terminology is helpful but it is not required. - -**Learning Objectives** - -@learning_objectives - -
- -## Bias in Machine Learning - -Although scientists before believed machine learning was an ethical, nonbiased mechanism to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are enforced intentionally or without knowing, biases continue to exist. - -A [video](https://www.youtube.com/watch?v=heQzqX35c9A) created by RSA with Cathy O'Neil warns the dangers hidden on algorithms and asks for viewers to begin questioning data. - - -True or False: There are over more than 100 human biases recorded that can potentially impact algorithms. - -[(X)] TRUE -[( )] FALSE - - -## Types of Bias in Machine Learning - -
-
-**Warning!**
-
-There are more than 100 human biases. The biases listed here are only the tip of the iceberg.
- -* Reporting Bias: Algorithms that relied on data sets can have an issue in the amount of times a particular instance is reported. This is an issue within frequency. As people often document events that are unusual or rare, the data set may lack how frequent "ordinary" events go. -* Implicit Bias: These are assumptions based on a programmer's own perspective and personal experiences that may not necessarily be true for everyone. A programmer can falsely attribute assumptions to their algorithm, therefore causing a chain reaction. -* Confirmation Bias: Developers can classify data in ways that will provoke an algorithm to prove their existing belief. -* Hidden Bias: These are underlying stereotypes that are attributed to a group of people unconciously. - -## Examples of Bias - -There are several examples of machine learning impacting real people. Below are examples briefly outlined; - -* In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in commercial algorithms used by the U.S. health care system. This algorithm falsely determined Black patients were healthier than equally sick White patients. The effects of this was in both the care they recieved and their financia aid. -* COMPAS, known as the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to determine the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued Black defendants were "twice as likely" as white defendants to be classified as being of higher risk of reoffending. This led to a dispute between the publication and Equivant- the company responsible for the software. -* A much different example shows an action that can cause previous held biases to disrupt the status quote. According to a [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accent. This idea as a based on Sanas assumption that callers will be nicer to hearing a "White" voice. While Sanas brags about how their startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas took to covering actual issues in call centers- such as low pay, little to no support, and long hours. Others argued the approach dehumanized the workers, though Sanas is still continoung with their business plan. -* And these are just some of the overwhelming amount of biases found in algorithms. - -[This](https://www.youtube.com/watch?v=gV0_raKR2UQ) video further provides examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve. - - -## What is Algorithmic Fairness? - -Algorithmic Fairness is described as a field of research dedicated to understanding biases such as those outlined in the previous section. Described as being an ethical way of approaching biases within machine learning, researchers aim to find ways to correct these biases. Of course, there is a high amount of complexity within this issue as a whole, and one universal clear policy seems unlikely to be attained any time soon. 
-Although the field of Algorithmic Fairness is fairly new and ever-changing, learning about its core goals and its attempts is vital to better analyze how intertwined ethics can be in Data Science.
-
-**Important note**
-
-There are differing views on how Algorithmic Fairness can impact research and more, whether for perceived good or bad. Reading these materials can help jumpstart uncomfortable conversations and acknowledge truths. While this module aims to explain the field, its relevance, and its potential future, the actions you take are ultimately up to you. However, it can help you understand the impact your actions can have and see the impacts the field will continue to have.
- - -## The Goals of Algorithmic Fairness - -Accoridng to an article published on [towardsdatascience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f), some of the goals of algorithmic fairness are as below; - -* Finding a definition of fairness -* Finding a way to appropriately measure fairness -* Finding ways to properly inform programmers/developers, companies, researchers, and more. -* Developing ethical ways to collect data that will be interpreted as fair. - -## The Future of Algorithmic Fairness - -
-
-**A little encouragement...**
-
-Becoming overwhelmed or feeling powerless can happen when looking into the issues within algorithms. This is a topic that can be uncomfortable for many and even new to some. However, there is a lot of work that can be done to correct biases in algorithms, and education is one of the first steps to understanding the complexity of this issue. Ethics as a whole can be scary, but the future of Data Science is still bright. There is so much that can be done and so much that is being done as you finish this module.
-
-The future of Algorithmic Fairness relies on the willingness of those in and out of the field to adapt and learn. This is easier said than done, as evidenced by the articles that have risen in popularity listing reasons to believe algorithms can never be fair, and the articles condemning them in response. However, there is a lot of work being done that can help the future of algorithms and machine learning. Below are just a few examples of people and projects advancing algorithmic fairness:
-
-* Canada CIFAR AI Chair [Dhanya Sridhar](https://cifar.ca/cifarnews/2022/09/12/believe-the-impossible-the-future-of-fairness-in-ai/) hopes to develop methods where machine learning can draw from "stable" and "causal" information. She plans on finding ethical ways to incorporate AI into decision making by forcing AI to focus on fairer and newer conclusions, rather than producing outcomes based on past assumptions.
-* Individuals like Matthew Finney, a data scientist researching algorithmic bias at Harvard, look to define and measure algorithmic bias while advocating for more data scientists of color.
-* Groups like the Algorithmic Fairness and Opacity Group (AFOG) were established to bring together different perspectives on fixing the issues of bias in algorithms.
-* There are attempts at raising awareness of the harm biases can cause. This is evident in professional seminars, online lessons, and various scientific articles.
-* Different ways to tackle this issue are being brainstormed. One solution is to retrain algorithms every so often with fresh data. Of course, these possible solutions need to be tested.
- -## Additional Resources - -The last section of the module content should be a list of additional resources, both ours and outside sources, including links to other modules that build on this content or are otherwise related. - -For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples. - -For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers some of it. - -This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube comes hand in hand with the previous article's content. - - -## Feedback - -In the beginning, we stated some goals. - -**Learning Objectives:** - -@learning_objectives - -We ask you to fill out a brief (5 minutes or less) survey to let us know: - -* If we achieved the learning objectives -* If the module difficulty was appropriate -* If we gave you the experience you expected - -We gather this information in order to iteratively improve our work. Thank you in advance for filling out [our brief survey](https://redcap.chop.edu/surveys/?s=KHTXCXJJ93&module_name=%22Module+Template%22)! From 05ac67df99531cf83c8dc1c367457cd12e2c65f3 Mon Sep 17 00:00:00 2001 From: jlinn3 <121886360+jlinn3@users.noreply.github.com> Date: Thu, 26 Jan 2023 20:26:38 -0500 Subject: [PATCH 08/10] Delete demystifying_algorithmic_fairnesss.md --- .../demystifying_algorithmic_fairnesss.md | 162 ------------------ 1 file changed, 162 deletions(-) delete mode 100644 demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md deleted file mode 100644 index 1f5c90c34..000000000 --- a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairnesss.md +++ /dev/null @@ -1,162 +0,0 @@ - - -# Demystifying Algorithmic Fairness - -
- -## Overview -@comment - -**Is this module right for me?** -@long_description - -**Estimated time to completion:** -@estimated_time - -**Pre-requisites** - -None. This lesson is appropriate for beginners looking to learn more about the ethical problems arising in Data Science. Experience with basic Data Science terminology is helpful but it is not required. - -**Learning Objectives** - -@learning_objectives - -
- -## Bias in Machine Learning - -Although scientists before believed machine learning was an ethical, nonbiased mechanism to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are enforced intentionally or without knowing, biases continue to exist. - -Below is a short video created by RSA with Cathy O'Neil narrating the dangers hidden with algorithms and machine learning. - -https://www.youtube.com/watch?v=heQzqX35c9A - -True or False: There are over more than 100 human biases recorded that can potentially impact algorithms. - -[(X)] TRUE -[( )] FALSE - - -## Types of Bias in Machine Learning - -
-
-**Warning!**
-
-There are more than 100 human biases. The biases listed here are only the tip of the iceberg.
- -* Reporting Bias: Algorithms that relied on data sets can have an issue in the amount of times a particular instance is reported. This is an issue within frequency. As people often document events that are unusual or rare, the data set may lack how frequent "ordinary" events go. -* Implicit Bias: These are assumptions based on a programmer's own perspective and personal experiences that may not necessarily be true for everyone. A programmer can falsely attribute assumptions to their algorithm, therefore causing a chain reaction. -* Confirmation Bias: Developers can classify data in ways that will provoke an algorithm to prove their existing belief. -* Hidden Bias: These are underlying stereotypes that are attributed to a group of people unconciously. - -## Examples of Bias - -There are several examples of machine learning impacting real people. Below are examples briefly outlined; - -* In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in commercial algorithms used by the U.S. health care system. This algorithm falsely determined Black patients were healthier than equally sick White patients. The effects of this was in both the care they recieved and their financia aid. -* COMPAS, known as the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to determine the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued Black defendants were "twice as likely" as white defendants to be classified as being of higher risk of reoffending. This led to a dispute between the publication and Equivant- the company responsible for the software. -* A much different example shows an action that can cause previous held biases to disrupt the status quote. According to a [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accent. This idea as a based on Sanas assumption that callers will be nicer to hearing a "White" voice. While Sanas brags about how their startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas took to covering actual issues in call centers- such as low pay, little to no support, and long hours. Others argued the approach dehumanized the workers, though Sanas is still continoung with their business plan. -* And these are just some of the overwhelming amount of biases found in algorithms. - -Below is a video that further provides examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve. - -(https://www.youtube.com/watch?v=gV0_raKR2UQ) - - -## What is Algorithmic Fairness? - -Algorithmic Fairness is described as a field of research dedicated to understanding biases such as those outlined in the previous section. Described as being an ethical way of approaching biases within machine learning, researchers aim to find ways to correct these biases. Of course, there is a high amount of complexity within this issue as a whole, and one universal clear policy seems unlikely to be attained any time soon. 
-Although the field of Algorithmic Fairness is fairly new and ever-changing, learning about its core goals and its attempts is vital to better analyze how intertwined ethics can be in Data Science.
-
-**Important note**
-
-There are differing views on how Algorithmic Fairness can impact research and more, whether for perceived good or bad. Reading these materials can help jumpstart uncomfortable conversations and acknowledge truths. While this module aims to explain the field, its relevance, and its potential future, the actions you take are ultimately up to you. However, it can help you understand the impact your actions can have and see the impacts the field will continue to have.
- - -## The Goals of Algorithmic Fairness - -Accoridng to an article published on [towardsdatascience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f), some of the goals of algorithmic fairness are as below; - -* Finding a definition of fairness -* Finding a way to appropriately measure fairness -* Finding ways to properly inform programmers/developers, companies, researchers, and more. -* Developing ethical ways to collect data that will be interpreted as fair. - -## The Future of Algorithmic Fairness - -
-
-**A little encouragement...**
-
-Becoming overwhelmed or feeling powerless can happen when looking into the issues within algorithms. This is a topic that can be uncomfortable for many and even new to some. However, there is a lot of work that can be done to correct biases in algorithms, and education is one of the first steps to understanding the complexity of this issue. Ethics as a whole can be scary, but the future of Data Science is still bright. There is so much that can be done and so much that is being done as you finish this module.
- -The future of Algorithmic Fairness relies on the willingness for those in and out of the field to adapt and learn. This is easier said than done, as evident by articles that have risen in popularity to list the cons of believing algorithms can ever be fair and the articles condemning them in response. However, there is a lot of work being done that can help the future of algorithms and machine learning. Below are just a few examples of people and projects advancing alogorithmic fairness: - -* Canada CIFAR AI Chair [Dhanya Sridhar](https://cifar.ca/cifarnews/2022/09/12/believe-the-impossible-the-future-of-fairness-in-ai/) hopes to develop methods where machine learning can draw from "stable" and "casual" information. She plans on finding ethical ways to incorporate AI into decision making by forcing AI to focus on the fairer and newer conclusions, rather than producing outcomes based on past assumptions. -* Individuals like Matthew Finney, a data scientist researching the advancement of algorithmic bias at Harvard, look to define and measure algorithmic bias while advocating for more data scientists of color. -* Groups like the Algorithmic Fairness Opacity Group, or the AFOG, became established to bring together different perspectives into fixing the issues of bias in algorithms. -* There are attempts at raising awareness of the harm biases can cause. This is evident in professional seminars, online lessons, and various scientific articles. -* There are different ways being brainstormed to tackle this issue. One solution is to retrain algorithms every so often with fresh data. Of course, these possible solutions need to be tested. - - -## Additional Resources - -The last section of the module content should be a list of additional resources, both ours and outside sources, including links to other modules that build on this content or are otherwise related. - -For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples. - -For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers some of it. - -This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube comes hand in hand with the previous article's content. - - -## Feedback - -In the beginning, we stated some goals. - -**Learning Objectives:** - -@learning_objectives - -We ask you to fill out a brief (5 minutes or less) survey to let us know: - -* If we achieved the learning objectives -* If the module difficulty was appropriate -* If we gave you the experience you expected - -We gather this information in order to iteratively improve our work. Thank you in advance for filling out [our brief survey](https://redcap.chop.edu/surveys/?s=KHTXCXJJ93&module_name=%22Module+Template%22)! 
From 1f4bfad1648a93f9b074ef3983fd62c23a78e275 Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 20:27:22 -0500
Subject: [PATCH 09/10] Create demystifying_algorithmic_fairness.md

---
 .../demystifying_algorithmic_fairness.md | 159 ++++++++++++++++++
 1 file changed, 159 insertions(+)
 create mode 100644 demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md

diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
new file mode 100644
index 000000000..51cd31ca8
--- /dev/null
+++ b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
@@ -0,0 +1,159 @@
+
+
+# Demystifying Algorithmic Fairness
+ +## Overview +@comment + +**Is this module right for me?** +@long_description + +**Estimated time to completion:** +@estimated_time + +**Pre-requisites** + +None. This lesson is appropriate for beginners looking to learn more about the ethical problems arising in Data Science. Experience with basic Data Science terminology is helpful but it is not required. + +**Learning Objectives** + +@learning_objectives + +
+ +## Bias in Machine Learning + +Although scientists before believed machine learning was an ethical, nonbiased mechanism to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are enforced intentionally or without knowing, biases continue to exist. + +This short [video](https://www.youtube.com/watch?v=heQzqX35c9A) created by RSA with Cathy O'Neil narrates the dangers hidden with algorithms and machine learning. + + +True or False: There are over more than 100 human biases recorded that can potentially impact algorithms. + +[(X)] TRUE +[( )] FALSE + + +## Types of Bias in Machine Learning + +
+
+**Warning!**
+
+There are more than 100 human biases. The biases listed here are only the tip of the iceberg.
+ +* Reporting Bias: Algorithms that relied on data sets can have an issue in the amount of times a particular instance is reported. This is an issue within frequency. As people often document events that are unusual or rare, the data set may lack how frequent "ordinary" events go. +* Implicit Bias: These are assumptions based on a programmer's own perspective and personal experiences that may not necessarily be true for everyone. A programmer can falsely attribute assumptions to their algorithm, therefore causing a chain reaction. +* Confirmation Bias: Developers can classify data in ways that will provoke an algorithm to prove their existing belief. +* Hidden Bias: These are underlying stereotypes that are attributed to a group of people unconciously. + +## Examples of Bias + +There are several examples of machine learning impacting real people. Below are examples briefly outlined; + +* In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in commercial algorithms used by the U.S. health care system. This algorithm falsely determined Black patients were healthier than equally sick White patients. The effects of this was in both the care they recieved and their financia aid. +* COMPAS, known as the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to determine the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued Black defendants were "twice as likely" as white defendants to be classified as being of higher risk of reoffending. This led to a dispute between the publication and Equivant- the company responsible for the software. +* A much different example shows an action that can cause previous held biases to disrupt the status quote. According to a [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accent. This idea as a based on Sanas assumption that callers will be nicer to hearing a "White" voice. While Sanas brags about how their startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas took to covering actual issues in call centers- such as low pay, little to no support, and long hours. Others argued the approach dehumanized the workers, though Sanas is still continuing with their business plan. +* And these are just some of the overwhelming amount of biases found in algorithms. + +This [video](https://www.youtube.com/watch?v=gV0_raKR2UQ) that further provides examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve. + + +## What is Algorithmic Fairness? + +Algorithmic Fairness is described as a field of research dedicated to understanding biases such as those outlined in the previous section. Described as being an ethical way of approaching biases within machine learning, researchers aim to find ways to correct these biases. Of course, there is a high amount of complexity within this issue as a whole, and one universal clear policy seems unlikely to be attained any time soon. 
+
+## Examples of Bias
+
+There are several examples of machine learning impacting real people. Below are a few, briefly outlined:
+
+* In 2019, a [Science](https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/) article found evidence of racial bias in a commercial algorithm used by the U.S. health care system. The algorithm falsely determined that Black patients were healthier than equally sick White patients, which affected both the care they received and their financial aid.
+* COMPAS, the Correctional Offender Management Profiling for Alternative Sanctions, was an algorithm used to estimate the likelihood of a criminal reoffending. An article published by [ProPublica](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) led to further analysis of the algorithm, which argued that Black defendants were "twice as likely" as White defendants to be classified as being at higher risk of reoffending. This led to a dispute between the publication and Equivant, the company responsible for the software.
+* A much different example shows how an action can let previously held biases disrupt the status quo. According to an [SFGATE](https://www.sfgate.com/news/article/sanas-startup-creates-american-voice-17382771.php) article, Sanas is a startup aiming to make call center workers sound "American" by hiding their accents. The idea is based on Sanas's assumption that callers will be nicer when they hear a "White" voice. While Sanas brags that its startup will "bring millions of jobs to the Philippines, millions of jobs to India", many criticize the band-aid approach Sanas takes to covering up actual issues in call centers, such as low pay, little to no support, and long hours. Others argue the approach dehumanizes the workers, though Sanas is continuing with its business plan.
+* And these are just a few of the overwhelming number of biases found in algorithms.
+
+This [video](https://www.youtube.com/watch?v=gV0_raKR2UQ) provides further examples visually. It contains a list of examples regarding data and bias, while also addressing the issue and what algorithmic fairness hopes to achieve.
+
+
+## What is Algorithmic Fairness?
+
+Algorithmic Fairness is a field of research dedicated to understanding biases such as those outlined in the previous section. Described as an ethical way of approaching biases within machine learning, the field's researchers aim to find ways to correct these biases. Of course, there is a high amount of complexity within this issue as a whole, and one universal, clear policy seems unlikely to be attained any time soon.
+Although the field of Algorithmic Fairness is fairly new and ever-changing, learning about its core goals and its attempts is vital to better analyze how intertwined ethics can be in Data Science.
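+
+One way to see why a single, universal policy is so hard to reach: equally reasonable definitions of fairness can disagree about the very same predictions. The Python sketch below uses made-up numbers (hypothetical, not drawn from any real system) to show a classifier that satisfies demographic parity while failing to equalize false positive rates, the kind of disagreement that surfaced in the COMPAS debate above.
+
+```python
+import numpy as np
+
+# Hypothetical labels and predictions for two groups of six people each.
+y_true = {"group_a": np.array([1, 1, 0, 0, 0, 0]),
+          "group_b": np.array([1, 0, 0, 0, 0, 0])}
+y_pred = {"group_a": np.array([1, 1, 1, 0, 0, 0]),
+          "group_b": np.array([1, 1, 1, 0, 0, 0])}
+
+for g in y_true:
+    selection_rate = y_pred[g].mean()                 # demographic parity compares this
+    false_pos = ((y_pred[g] == 1) & (y_true[g] == 0)).sum()
+    fpr = false_pos / (y_true[g] == 0).sum()          # equalized odds compares this
+    print(g, f"selection rate={selection_rate:.2f}", f"FPR={fpr:.2f}")
+
+# Both groups are selected at the same 0.50 rate, yet group_b's false positive
+# rate (0.40) is higher than group_a's (0.25): fair by one definition, not the other.
+```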
+
+**Important note**
+
+There are differing views on how Algorithmic Fairness can impact research and more, whether for perceived good or bad. Reading these materials can help jumpstart uncomfortable conversations and acknowledge truths. While this module aims to explain the field, its relevance, and its potential future, the actions you take are ultimately up to you. However, it can help you understand the impact your actions can have and see the impacts the field will continue to have.
+ + +## The Goals of Algorithmic Fairness + +Accoridng to an article published on [towardsdatascience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f), some of the goals of algorithmic fairness are as below; + +* Finding a definition of fairness +* Finding a way to appropriately measure fairness +* Finding ways to properly inform programmers/developers, companies, researchers, and more. +* Developing ethical ways to collect data that will be interpreted as fair. + +## The Future of Algorithmic Fairness + +
+
+## The Future of Algorithmic Fairness
+
+**A little encouragement...**
+ +Becoming overwhelmed or feelings of impotence can arise from looking into issues within algorithms. This is a topic that can be uncomfortable to many and even new to some. However, there is a lot of work that can be done to correct biases in algorithms, and education is one of the first steps to understanding the complexity of this issue. Ethics as a whole can be scary, but the future of Data Science is still bright. There is so much that can be done and so much that is being done as you finish this module. +
+ +The future of Algorithmic Fairness relies on the willingness for those in and out of the field to adapt and learn. This is easier said than done, as evident by articles that have risen in popularity to list the cons of believing algorithms can ever be fair and the articles condemning them in response. However, there is a lot of work being done that can help the future of algorithms and machine learning. Below are just a few examples of people and projects advancing alogorithmic fairness: + +* Canada CIFAR AI Chair [Dhanya Sridhar](https://cifar.ca/cifarnews/2022/09/12/believe-the-impossible-the-future-of-fairness-in-ai/) hopes to develop methods where machine learning can draw from "stable" and "casual" information. She plans on finding ethical ways to incorporate AI into decision making by forcing AI to focus on the fairer and newer conclusions, rather than producing outcomes based on past assumptions. +* Individuals like Matthew Finney, a data scientist researching the advancement of algorithmic bias at Harvard, look to define and measure algorithmic bias while advocating for more data scientists of color. +* Groups like the Algorithmic Fairness Opacity Group, or the AFOG, became established to bring together different perspectives into fixing the issues of bias in algorithms. +* There are attempts at raising awareness of the harm biases can cause. This is evident in professional seminars, online lessons, and various scientific articles. +* There are different ways being brainstormed to tackle this issue. One solution is to retrain algorithms every so often with fresh data. Of course, these possible solutions need to be tested. + + +## Additional Resources + +The last section of the module content should be a list of additional resources, both ours and outside sources, including links to other modules that build on this content or are otherwise related. + +For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples. + +For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers some of it. + +This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube comes hand in hand with the previous article's content. + + +## Feedback + +In the beginning, we stated some goals. + +**Learning Objectives:** + +@learning_objectives + +We ask you to fill out a brief (5 minutes or less) survey to let us know: + +* If we achieved the learning objectives +* If the module difficulty was appropriate +* If we gave you the experience you expected + +We gather this information in order to iteratively improve our work. Thank you in advance for filling out [our brief survey](https://redcap.chop.edu/surveys/?s=KHTXCXJJ93&module_name=%22Module+Template%22)! 
+
+
+## Additional Resources
+
+For more information on biases, [Google](https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias) has provided a crash course lesson with examples.
+
+For more information on algorithmic fairness and possible solutions, this article published on [TowardsDataScience](https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f) covers several of them.
+
+This [video](https://youtu.be/WNvQG2WqJG0) posted on YouTube goes hand in hand with the previous article's content.
+
+
+## Feedback
+
+In the beginning, we stated some goals.
+
+**Learning Objectives:**
+
+@learning_objectives
+
+We ask you to fill out a brief (5 minutes or less) survey to let us know:
+
+* If we achieved the learning objectives
+* If the module difficulty was appropriate
+* If we gave you the experience you expected
+
+We gather this information in order to iteratively improve our work. Thank you in advance for filling out [our brief survey](https://redcap.chop.edu/surveys/?s=KHTXCXJJ93&module_name=%22Module+Template%22)!

From 2af9cbd249dbf536d6d7d70720f14d5a292241dd Mon Sep 17 00:00:00 2001
From: jlinn3 <121886360+jlinn3@users.noreply.github.com>
Date: Thu, 26 Jan 2023 20:29:29 -0500
Subject: [PATCH 10/10] Update demystifying_algorithmic_fairness.md

---
 .../demystifying_algorithmic_fairness.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
index 51cd31ca8..b0b604991 100644
--- a/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
+++ b/demystifying_algorithmic_fairness/demystifying_algorithmic_fairness.md
@@ -58,7 +58,7 @@ None. This lesson is appropriate for beginners looking to learn more about the e
 Although scientists once believed machine learning was an ethical, unbiased way to approach different problems, the truth is that bias still exists in algorithms. After all, humans create algorithms. Whether biases are introduced intentionally or unknowingly, biases continue to exist.
 
-This short [video](https://www.youtube.com/watch?v=heQzqX35c9A), created by RSA and narrated by Cathy O'Neil, describes the dangers hidden in algorithms and machine learning.
+This [video](https://www.youtube.com/watch?v=heQzqX35c9A), created by RSA and narrated by Cathy O'Neil, a data scientist studying AI bias, describes the dangers hidden in algorithms and machine learning.
 
 True or False: There are more than 100 recorded human biases that can potentially impact algorithms.