AI Ethics

AI Can Reject YOU!

Written by Ahmad Alobaid. Updated 20 July 2024. 5-minute read.

AI can literally reject you and there is nothing that you can do about it. And yes, it is going to affect you more than you think.

Content

  1. Is AI rejecting you?
  2. Why is it doing so?
  3. How to protect yourself?
  4. How might organizations do it?

As the adoption of Artificial Intelligence (AI) has increased and its benefits have taken companies and organizations by storm, some have started using it without fully understanding what is going on (or simply ignoring their inner voice). This is not about AI robots knocking on the door, Terminator-style. This is something actually happening, something that has snuck into day-to-day activities.


Is AI rejecting you?

Have you ever gone to a bank to get a loan? They do a risk assessment before making a decision: basically, they try to predict whether you will be able to pay the loan back plus the interest. Have you ever applied for a credit card and been asked for a deposit? Most probably it is because they do not think you will be able to pay it back.


Have you gone to get insurance, and the price was ridiculously high? Or worse, have they refused to cover you at all, just because you were born with an unfortunate health condition? More about that here.


How about applying for a job and never getting a callback even though you are the perfect person for the position? How about crushing the interview, only for them to reject you anyway? Maybe they would have hired you if the AI system hadn’t flagged you as “not recommended”. This might not have anything to do with you being a good or a bad candidate. You might be interested in what happened when Amazon used AI to help with the hiring process by vetting resumes.


Amazon used an AI hiring system, but it turned out to be problematic; it was “sexist”, as described by the BBC. It preferred male candidates to female candidates and would penalize resumes that mentioned women’s clubs. There is a scientific explanation for that, and it is easy to understand: the AI system was most likely using historical data to learn and optimize for the best results. Below, we explain how that can happen without anyone actually meaning to be sexist (or biased).

Why is it doing so?

Let us imagine that we provided the AI system with two piles of resumes: one with the good ones and another with the bad ones. The AI system would “study” those resumes and develop theories or ideas about what constitutes a good resume and what makes a resume bad (or unfit for the position).


It is important to emphasize that whatever rules or knowledge the AI “learns” do not necessarily have to make sense. For example, it may “learn” that resumes that use the Arial font are better (more fit to hire) than resumes written in Times New Roman (both fonts are used in resume templates by MIT). We can agree that using one or the other has no relation to whether the resume is good or bad.


Having a large amount of data might solve part of this problem. In the same example, the AI “learned” that Arial resumes are better than Times New Roman ones because the pile of accepted resumes is written mostly in Arial, while the rejected pile is written mostly in Times New Roman. It can also learn more sensible attributes, such as relevant experience, needed skills, (required) education, fit for the position, etc.
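To make the Arial example concrete, here is a minimal sketch (my own illustration, not taken from any real hiring system) of how a simple classifier can end up assigning weight to an irrelevant feature such as the font, purely because that feature happens to correlate with the accepted/rejected label in the training pile:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical resume features: years of experience and the font used.
experience = rng.integers(0, 15, size=n)   # a genuinely relevant signal
uses_arial = rng.integers(0, 2, size=n)    # irrelevant to candidate quality

# Simulated historical decisions: driven mostly by experience, but the
# accepted pile also happens to contain more Arial resumes (a spurious link).
accepted = ((experience > 5) | ((uses_arial == 1) & (rng.random(n) < 0.6))).astype(int)

X = np.column_stack([experience, uses_arial])
model = LogisticRegression().fit(X, accepted)

# The learned weight on the font column comes out clearly positive, even
# though the font says nothing about whether the candidate is any good.
print(dict(zip(["experience", "uses_arial"], model.coef_[0].round(2))))
```

If the font/decision correlation disappears from the training data, the weight on uses_arial drops toward zero; more representative data helps in exactly this sense.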


But if hiring managers prefer graduates from Ivy League universities [1, 2, 3, 4], people from certain neighborhoods [5, 6], or people of a certain background or race [7, 8], then the AI system will pick up on that. This does not mean these hiring managers intended to be biased; it can be unconscious bias. Whether biases backed by analytics are ethical is another question entirely.



There are ways to reduce bias, such as standardizing interviews and objectively defining culture fit. However, an AI system does not care about the measures implemented to combat bias; it only reacts to the data being fed. If the implemented measures reduce the bias, the data will have less bias, and the AI will inherently be less biased. There are also other techniques to combat bias that can be implemented.
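One practical way to see whether the data being fed to such a system is producing skewed outcomes is simply to measure them. Below is a minimal sketch (my own illustration; the groups, predictions, and the selection-rate comparison are assumptions, not something described in this article) of a demographic-parity style check on a screening model's outputs:

```python
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of candidates flagged as recommended (1) within each group."""
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

# Hypothetical model outputs (1 = recommended) and a sensitive attribute.
preds = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = selection_rates(preds, group)
print(rates)                                                    # {'A': 0.8, 'B': 0.2}
print("disparity:", max(rates.values()) - min(rates.values()))  # about 0.6
```

A large gap between groups does not by itself prove the model is unfair, but it is a cheap signal that the training data (or the model) deserves a closer look.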


How to protect yourself?

Awareness. According to a survey by Pew Research, around 44% of people think they do not regularly interact with AI. Being aware of the deployment of such systems and how they are being used might help us eliminate or reduce the possible negative outcomes. There is no single, easy, magical thing that individuals can practically do to protect themselves against AI.


However, awareness can be helpful on a case-by-case basis. People can change their behavior to maximize the likelihood of a positive outcome. So, if we are talking about the hiring process, people can do certain things to be more “hireable” for an AI system. Using certain (or the “correct”) keywords might improve your chances. Some people have also tried gaming such systems using a technique called keyword stuffing: adding keywords to the resume to trick the system into ranking it higher. LinkedIn and others claim that it can also backfire. Regardless, being aware of such techniques and how AI reacts to them can empower us to make the best out of this situation.
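To illustrate why the “correct” keywords can matter so much, here is a deliberately naive keyword-matching scorer (purely illustrative; real applicant tracking systems are more sophisticated, and the job keywords below are made up). It also shows why keyword stuffing can game a system that only counts matches:

```python
# Made-up keywords for a hypothetical job posting.
JOB_KEYWORDS = {"python", "sql", "machine learning", "etl", "airflow"}

def keyword_score(resume_text: str) -> int:
    """Count how many of the job's keywords appear anywhere in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in JOB_KEYWORDS if kw in text)

honest = "Data engineer with Python and SQL experience; built ETL pipelines."
stuffed = honest + " machine learning airflow " * 3   # keyword stuffing

print(keyword_score(honest))    # 3: only the keywords the candidate actually used
print(keyword_score(stuffed))   # 5: a higher rank, despite adding no real substance
```

A scorer like this is trivially gamed, which is one reason modern systems (and recruiters) tend to penalize obvious stuffing rather than reward it.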


Researchers in the AI domain can also develop techniques to help protect people’s privacy. Researchers from Australia and the United States developed a way to trick image recognition systems, called TnT Attacks. You can see a video of how they tricked an AI system by showing it a flower, which the recognition system detected as Barack Obama; a demo can be found here. An Italian fashion startup called Capable sells clothes with prints that confuse image recognition software, as reported by CNN. Researchers at McAfee tricked a Tesla into speeding up using tape (see here). However, people must be aware that some such actions may be unethical or even illegal.

How might organizations do it?

  1. Technical Bias Mitigation Techniques. Organizations can employ technical measures to reduce the possible bias in their AI systems and, with it, the negative effects of AI use (e.g., resampling, reweighting); see the sketch after this list.
  2. Transparent Protocol. If you ask an organization that uses AI about the protocol it follows, it probably will not share it. It is important for organizations to offer a way to request that AI decisions be reviewed by humans. This can be hard to make work in practice, as employees may simply default to whatever decision the AI system suggests. Still, as a start, having a transparent protocol for how such cases are handled might mitigate the negative outcomes. You can also check how the EU imposes different requirements depending on the risk level, such as the EU AI Act and the EU Regulatory Framework on AI.
  3. Review Cycle. Having humans in the loop is crucial. Continuously reviewing what is going on and being aware of the different kinds of biases, especially the ones committed by humans, can shed light on what to look for in these reviews.
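As a concrete (and heavily simplified) example of the reweighting idea from item 1, here is a sketch with synthetic data and assumptions of my own: each group/label combination gets a weight that makes the sensitive attribute and the historical decision look statistically independent during training, which dampens the bias inherited from the data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(groups, labels):
    """Per-example weight = P(group) * P(label) / P(group, label)."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    weights = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            if mask.any():
                weights[mask] = (groups == g).mean() * (labels == y).mean() / mask.mean()
    return weights

# Synthetic historical hiring data: one feature, a group attribute, and labels
# that are skewed in favor of group 1 (the inherited bias we want to dampen).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
group = rng.integers(0, 2, size=200)
y = (rng.random(200) < 0.3 + 0.4 * group).astype(int)

# Under-selected combinations get upweighted, over-selected ones downweighted,
# then the model is trained with those weights.
w = reweighing_weights(group, y)
model = LogisticRegression().fit(X, y, sample_weight=w)
```

Resampling works in the same spirit: instead of weighting examples, you duplicate or drop them until the group/label proportions are balanced.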
