You likely know at least a bit about background screening, or at least the general concept: you can hire a company to run a background check on someone, and it will come back to you with a detailed report.
Often, you might screen someone you're thinking about hiring to work for your company. You may also do it if you're dating someone and considering taking things to the next level. You may want to know how open and honest this person is, or whether they're hiding anything you'd like to know about.
However, you may not know how background screening actually works. We'll cover that in the following article, so you'll have some idea of what goes on behind the scenes before that report lands in someone's hands.
If you want to understand how background screening technology works, you’ll need to know a little about a field called predictive technology. It’s fairly new, but it has grown by leaps and bounds in the past few years.
Those in the field regard predictive technology as cutting-edge. It has existed for a while, but it keeps improving as time passes, as new technologies tend to do: they emerge rough around the edges and gradually become more streamlined and efficient.
Predictive technology, when applied to background screening, combines artificial intelligence and machine learning. Together, they analyze information found in a person's background: public records, social media, and any other sources they can reach.
They use the information they discover to predict someone's future behavior. It's a little unnerving to think that AI and machine learning can do that, but they can, and their predictions are often quite accurate.
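To make the idea concrete, here is a minimal sketch of the kind of scoring such systems perform. Everything in it is an assumption for illustration: the feature names, the hand-picked weights (standing in for what a trained model would learn from historical data), and the logistic formula are all invented, and real screening products use far richer data and proprietary models.

```python
import math

# Hypothetical weights: these are illustrative stand-ins, not values
# from any real screening product. Positive weights raise the risk
# score; negative weights lower it.
WEIGHTS = {
    "prior_convictions": 0.9,          # each prior conviction raises risk
    "years_since_last_incident": -0.3, # time without incidents lowers risk
    "negative_public_records": 0.5,    # e.g., liens, judgments
}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Return a probability-like risk score between 0 and 1."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0) for name in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function squashes z into (0, 1)

clean_record = {"prior_convictions": 0, "years_since_last_incident": 10}
long_record = {"prior_convictions": 4, "years_since_last_incident": 1}

print(round(risk_score(clean_record), 2))  # ~0.01: predicted low risk
print(round(risk_score(long_record), 2))   # ~0.79: predicted high risk
```

The key point is that the output is a probability, not a verdict: the model turns a person's recorded past into a number, and everything downstream (hire or don't hire) hinges on how a human interprets that number.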
How Is that Different from Prior Background Screening?
In the past, you might have received a report if you checked up on someone. It told you if they had a criminal record, whether they owed the IRS any money, their marital status, etc.
This is what the business called people-oriented research. You got the information. Then, you could use it to try and determine whether you should continue dating this person, whether you should hire them to be a part of your company, whether they would make a reliable babysitter, and so forth.
The AI and machine learning combination has changed all of this. Now, you do not have to guess whether a person might behave a certain way in the future because of how they acted in the past. An advanced algorithm will tell you, with remarkable accuracy, how someone will behave based on information it garners from their life up to this point.
Is This a Foolproof System?
When you think about this model that the background screening industry uses more and more these days, it's easy to think of movies like Minority Report, in which a predictive system identifies murderers before they act.
What’s potentially troubling about the concept is that just because a machine predicts that there is a high probability that something will happen, that does not necessarily mean that it will. For instance, AI and machine learning might predict that someone will probably commit a crime in the future because they committed one in the past. That is not always the case, though.
That person might learn from their mistakes and obey the law from a certain point forward. They may see the error of their ways. If they have a lengthy criminal history, though, the algorithm will likely predict that they will not change and that they will continue with this same behavior for the foreseeable future.
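The gap between a high predicted probability and a certain outcome is easy to demonstrate with a toy simulation. The 0.75 figure and population size below are arbitrary assumptions chosen for illustration, not numbers from any real screening model.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Suppose a model assigns a 0.75 probability of reoffending to each
# of 1,000 people. Even at that "high" risk level, roughly a quarter
# of them will still defy the prediction.
predicted_risk = 0.75
population = 1000

did_not_reoffend = sum(
    1 for _ in range(population) if random.random() >= predicted_risk
)
print(did_not_reoffend)  # roughly 250 out of 1,000
```

In other words, a score of 0.75 still means hundreds of people out of every thousand will never do what the algorithm predicted, which is exactly the fairness problem the rest of this section describes.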
What This Means for Background Checks and Those They Impact
If you’re someone with a long criminal history, you should know about this technology. Say that someone does a background check on you because they’re thinking about employing you. The AI and machine learning will look at a large dataset related to you, find connections, and compile a comprehensive background report.
When that happens, if the AI and machine learning technology tells a prospective employer that you’re likely to steal from the company or do something else of a criminal nature, that employer is probably not going to hire you. That might not seem fair, but that is the reality of where the technology is these days.
Many competing software packages are on the market, and there is no telling which one a possible employer might use when they’re trying to learn about you. An individual might use a different predictive analysis package to determine whether someone will make a good babysitter, for instance.
The point is that predictive analytics dominate this field right now, despite the possibility of error. Just because there’s a chance, or even a high probability, that someone will do something, it does not necessarily follow that they will do it.
What Can You Do About This?
As someone seeking a job, if an employer tells you that a predictive analytics model revealed you will probably not make a good employee, all you can do is plead your case. You can’t change what you’ve done in the past, and if you committed crimes that are part of the public record, you can’t very well dispute them.
What it comes down to is you arguing with an algorithm. You can feel sure that you’ll make a good employee and that you won’t engage in the same behaviors that got you in trouble in the past. However, if the employer is not willing to give you that chance, you’re out of luck.
The best you can hope for is to find a company that does not use the predictive analytics model. Unfortunately, as we’ve mentioned, this model dominates the industry right now. You might find it hard to locate employment if you keep running into this problem.