A phenomenon in which an algorithm produces systematically prejudiced results due to erroneous assumptions or flawed data in the AI learning process, including privileging one category over another in ways unrelated to the algorithm's intended function. For example, training on data in which women or minority groups are underrepresented can lead to skewed predictive AI algorithms.
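A minimal sketch of how underrepresentation in training data can skew a model. The two groups, their score cutoffs, and the 90/10 split are all hypothetical numbers chosen for illustration; the "model" is just a single decision threshold fit to minimize overall training error, standing in for any learned classifier.

```python
# Hypothetical data: a score in [0, 1] and a binary label per example.
# Majority group A (90 examples): the true label flips at score 0.60.
group_a = [(s / 100, 1 if s >= 60 else 0) for s in range(10, 100)]
# Underrepresented group B (10 examples): the true cutoff is lower, 0.40.
group_b = [(s / 100, 1 if s >= 40 else 0) for s in range(10, 100, 9)]

train = group_a + group_b  # group B is only 10% of the training data

def error(threshold, data):
    """Fraction of examples misclassified by predicting 1 iff score >= threshold."""
    return sum((score >= threshold) != bool(label) for score, label in data) / len(data)

# "Learn" the single threshold that minimizes overall training error.
# Because group A dominates, the fit gravitates to A's cutoff.
best_t = min((t / 100 for t in range(0, 101)), key=lambda t: error(t, train))

print(best_t)                  # 0.6 — the majority group's cutoff
print(error(best_t, group_a))  # 0.0 — no errors on the majority group
print(error(best_t, group_b))  # 0.2 — higher error on the minority group
```

The learned threshold matches the majority group's pattern exactly, so every residual error lands on the underrepresented group: its true positives below 0.60 are all misclassified, even though the overall training error looks excellent.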