Robot Interviews: AI and the world of work
World Privacy Forum

6 May 2019
By Pam Dixon

AI will alter the world of work, both in workplaces and in workplace information ecosystems. Labor experts have a term for negotiating the near- and long-term effects of AI: "a fair and just transition."[1] These and other experts have already begun documenting the changes emerging in the workplace as a result of AI deployments.[2] Some consequences are positive; some are deleterious. For example, there is a risk that automation could intensify income inequality and displace a meaningful percentage of the global workforce.[3] There is also a possibility that certain types of work could increase in value, such as work in high-skill medical fields, among others.

No one has a crystal ball with a perfect view of what the future may hold. That is why a key focus now for labor stakeholders and those interested in AI and the world of work is to research and benchmark what is happening. The goal is to provide ongoing, fact-based documentation and analysis that will inform all stakeholders about effective mitigations and policies, ensuring a just transition into a new world in which AI is ubiquitous.

Recently, a Swedish company made headlines by announcing a physical job-interview AI robot designed to avoid human biases.[4] This is not the sole example of AI activity in the hiring process; companies have been using AI software to call candidates and interview them by phone for at least a year.[5] The idea of a robot interview sounds positive at a surface level: no bias sounds great, and robot interviews may indeed, on paper, reduce certain theoretical types of bias.[6] But the view from the trenches is what matters here. If a robot interview does not provide a platform for a holistic interview that captures the full range of an applicant's skills, from blunt to subtle, then a different kind of bias could be put in place. The original bias the robot was intended to solve could simply be transferred to another area.
For example, an AI robot interview could create other risks, such as an assessment that creates a preferential bias toward candidates with good language proficiency and, potentially, a negative bias against candidates whose disabilities affect aspects of speech. Candidates with many other positive skills that are too subtle for an AI robot interview, such as creative skills or nuanced interpersonal skills, could find that their best qualities are simply not taken into account. Solving for one piece of the hiring puzzle (the initial hiring dialogue) without solving for the larger picture (holistic candidate assessment) could introduce more problems than it solves. Recent research and thinking point to the idea of assessing candidates more holistically, not less.[7]

AI in the hiring process is but one of many issue areas that AI in the world of work brings forward. Looking at the influence of AI specifically on privacy and the world of work, it is clear that much more work needs to be done to gather facts and create benchmarks specific to the privacy, human autonomy, and dignity impacts of AI in the workplace, and to document relevant case studies.[8] This is a significant gap in research. It is important to catch up to the work being done on a fair and just transition regarding employability, gender disparity, salary, and so on, and to create a complementary body of knowledge that facilitates accurate policy responses supporting a fair and just transition that also enhances privacy, human autonomy, and dignity.

It is not that automated hiring systems are automatically bad, or bad for privacy. We simply do not have nearly enough independent documentation of their effects on privacy and many other potential issues. Simply put, we do not yet know enough about what the privacy impact is or is not. Seeing the privacy issues clearly requires looking into many adjacencies.
Here are some immediate questions:

- Are certain jobs more likely to use AI in hiring or other human resource activities or decisions? If so, which?
- What types of AI systems are being used for workplace purposes? What are the uses?
- What is the role of automated AI in hiring processes, and do these processes influence or change privacy or other fairness considerations? If so, how? (Data collection, analysis, uses, retention?)
- What are the standards for privacy impact analysis of AI in the workplace and in hiring?
- In the US, is the Department of Labor monitoring developments and conducting studies? Is the Equal Employment Opportunity Commission monitoring developments? In other jurisdictions, are the relevant government agencies monitoring developments and conducting studies?

Additional questions specific to AI in the hiring process:

- To what extent are AI systems determining or influencing responses in eligibility situations (employment)?
- What specific AI systems are in use now in the hiring process, either in-house or outsourced?[9]
- Are job applicants aware of AI systems in the hiring process?
- What is the business sector's responsibility to disclose AI systems used in the hiring process and to ensure that impacts are understood and mitigated?
- What is the privacy impact assessment of AI systems already in place?
- In what contexts have hiring robots already been deployed, and for how long?
- Is there a discrepancy in the application of AI hiring technologies; that is, do lower-wage applicants get AI interviews while higher-wage applicants do not?
- Have any of the AI robot users or developers instituted longitudinal studies regarding impact assessment?
- Are AI robot interviews or other AI-influenced activities in the hiring lifecycle compatible with fair and equal treatment of those who are sight- or hearing-impaired, or who have other disabilities?

This is just the beginning of the experimentation, implementation, and use of AI in hiring and the workplace.
As these uses expand, knowledge of how to navigate this new world needs to keep pace. Let the arrival of robot interviewers in hiring situations be a clarion call to catch up, because there will be multiple consequences, and we just do not have a clear picture of what they are yet. Neither hype about how great the systems are nor rhetoric about how damaging they are to privacy will be helpful. We need meaningful, factual documentation of actual effects and changes in the trenches; then all stakeholders can begin to understand more about what is happening and make appropriate decisions.[10] There is a lot at stake here, and it is important to get effective policies in place early. To do that, it is essential to do the hard work of ongoing documentation of the facts, learning what is happening as it is happening.

Pam Dixon, Executive Director, World Privacy Forum

Notes:

[1] There is significant policy momentum on digital transformations and the future of work at the EU level. An excellent study has been authored by the OECD representative of the Trade Union Advisory Committee (TUAC), who was also involved with the OECD AI Guidelines. The study includes an analysis of the impacts of digitization and AI on work, case studies from seven EU jurisdictions, and recommendations; its bibliography cites additional helpful case studies. See: Byhovskaya, A. (2018) Overview of the national strategies on work 4.0: a coherent analysis of the role of the social partners. Brussels: European Economic and Social Committee. Available at: https://www.eesc.europa.eu/en/our-work/publications-other-work/publications/overview-national-strategies-work-40-coherent-analysis-role-social-partners-study

[2] Justine Brown et al., Workforce of the Future: The competing forces shaping 2030, PwC. Available at: https://www.pwc.com/gx/en/services/people-organisation/workforce-of-the-future/workforce-of-the-future-the-competing-forces-shaping-2030-pwc.pdf

[3] James Manyika and Kevin Sneader, AI, automation, and the future of work: ten things to solve for, McKinsey Global Institute, Executive Briefing, June 2018. Available at: https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for#part3

[4] Maddy Savage, Meet Tengai, the job interview robot who won't judge you, BBC News, March 12, 2019. Available at: https://www.bbc.com/news/business-47442953

[5] Bill Goodwin, PepsiCo hires robots to interview job candidates, Computer Weekly, April 12, 2018. Available at: https://www.computerweekly.com/news/252438788/PepsiCo-hires-robots-to-interview-job-candidates

[6] Some scholars have made cases for the use of AI tools in hiring to reduce discrimination. See Kimberly Houser, Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making, February 28, 2019, 22 Stanford Technology Law Review (forthcoming). Available at SSRN: https://ssrn.com/abstract=3344751

[7] Interview with Srikanth Karra, chief human resource officer of Mphasis, The future of jobs in the world of AI and robotics, Knowledge@Wharton, University of Pennsylvania, March 1, 2018. Available at: https://knowledge.wharton.upenn.edu/article/future-jobs-world-ai-robotics/. See also the discussion of algorithms and classification bias in hiring in Pauline T. Kim, Data-Driven Discrimination at Work, William & Mary Law Review, Vol. 58, Issue 3, Article 4. Available at: https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=3680&context=wmlr

[8] The workplace of the future: As artificial intelligence pushes beyond the tech industry, work could become fairer—or more oppressive, Special Edition, The Economist, March 28, 2018. Available at: https://www.economist.com/leaders/2018/03/28/the-workplace-of-the-future

[9] Many such systems already exist. See, for example, Pymetrics AI Hiring Solution, https://www.pymetrics.com/employers/. Some of these systems could potentially boost privacy if implemented correctly, but more work is needed to fully understand and assess their quality and impacts.

[10] Pauline T. Kim, Data-Driven Discrimination at Work, William & Mary Law Review, Vol. 58, Issue 3, Article 4. Available at: https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=3680&context=wmlr
