Instead of the typical dystopian scene of flames, wastelands of shattered buildings, and robotic overlords policing the remaining humans, our actual dystopian future may be a workplace filled only with men named Jared who once played lacrosse in high school. This may sound far-fetched, but one resume-screening tool was found to be using an algorithm that concluded two factors were most determinative of job performance: being named Jared and having played lacrosse in high school. The frailties of artificial intelligence (AI) systems in recruitment and hiring could transform our workforces in unpredictable ways. If employers blindly follow AI outcomes without a deeper examination of how the algorithmic decision is reached, hiring outcomes may be not only ridiculous but also discriminatory.

Risks of AI-Reliant Hiring

Some employers have enthusiastically embraced AI as a way to reduce costs and remove human bias from the recruitment process. Human recruiters do not have a great track record; in France, for example, discrimination in recruitment has posed such a serious problem that the government submits false work biographies with ethnic names to identify and punish employers that unreasonably reject qualified ethnic applicants. Unfortunately, AI is modeled on human thinking, so it may amplify our own prejudices and errant conclusions while giving the appearance of a fair and clean process.

AI typically learns inductively, by training on examples and historical data. Factors such as the exclusion of certain groups from educational or career opportunities have often shaped this data, so an AI's decisions may amplify this past prejudice. For instance, Amazon experimented with mechanized recruitment in 2014 but abandoned the effort prior to implementation after the AI tool selected a predominantly male workforce. The tool had learned by analyzing patterns in resumes submitted to the company over the preceding 10 years. Since over this period