CREEPY: Border Control Points In The EU Install AI Lie Detectors

At some ports of entry in the European Union, an artificial intelligence-powered system called iBorderCtrl is being installed not only to speed up processing for travelers, but also to determine whether they’re lying.

According to New Scientist, a six-month trial will take place at four separate border crossing points in Greece, Hungary and Latvia. The trials will start with lab testing to familiarize border guards with the system, followed by scenarios and tests in realistic conditions along the borders.

During pre-screening, users upload their passport, visa, and proof of funds, then face a line of questions from a computer-generated border guard via webcam.

The system analyzes the user’s microexpressions to determine whether they are lying, then flags them as either low or high risk.


Questions from the AI guard include, “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”

Travelers who pass the test receive a QR code that lets them pass through. If there’s additional concern, their biometric data is taken and the case is handed over to a human agent for assessment.
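The article doesn’t describe iBorderCtrl’s internals, but the flow it outlines (score the interview, bucket the traveler, route them accordingly) amounts to a simple threshold decision. Here is a minimal sketch in Python; the 0-to-1 deception score, the 0.5 cutoff, and the function name are all assumptions made for illustration, not published details of the system:

```python
# Hypothetical sketch of the screening flow described above.
# The 0-1 deception score and the 0.5 cutoff are illustrative
# assumptions, not published details of iBorderCtrl.

RISK_THRESHOLD = 0.5  # arbitrary cutoff chosen for this example

def route_traveler(deception_score: float) -> str:
    """Flag a traveler as low or high risk and route them accordingly."""
    if deception_score < RISK_THRESHOLD:
        return "low risk: issue QR code, traveler passes through"
    return "high risk: collect biometrics, refer to human agent"

print(route_traveler(0.2))  # low risk path
print(route_traveler(0.8))  # high risk path
```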

“We’re employing existing and proven technologies — as well as novel ones — to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis told the European Commission.

“iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”

Accuracy is an obvious issue for iBorderCtrl, which is still in its early stages. One team member told New Scientist that early testing produced a 76% accuracy rate, but he believes it could be raised to 85%.

Artificial intelligence has already proven problematic in law enforcement with HART, the Harm Assessment Risk Tool, which British police use to help officers assess the risk of suspects re-offending. The system replicates racial biases, much as humans do.

When the data is systematically biased, outcomes may be discriminatory, because learning models bring to the foreground assumptions that humans have tacitly made.

Even with the system achieving around 88% accuracy, a “subset of the population can still have a much higher chance of being misclassified,” Frederike Kaltheuner, policy officer for Privacy International, told Mashable.

For example, if minorities are more likely to be put in the wrong basket, a system that is accurate on paper may still be racially biased.
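To see why, here is a short worked example; the group sizes and per-group accuracies below are invented for illustration, not figures from the article. A classifier can match the quoted 88% overall accuracy while still misclassifying one group three times as often:

```python
# Invented numbers illustrating Kaltheuner's point: overall accuracy
# can mask a much higher error rate for a smaller subgroup.

majority_n, majority_acc = 9_000, 0.90  # 90% accuracy on the larger group
minority_n, minority_acc = 1_000, 0.70  # 70% accuracy on the smaller group

overall_acc = (majority_n * majority_acc + minority_n * minority_acc) / (
    majority_n + minority_n
)

print(f"Overall accuracy: {overall_acc:.0%}")          # 88%
print(f"Majority error rate: {1 - majority_acc:.0%}")  # 10%
print(f"Minority error rate: {1 - minority_acc:.0%}")  # 30%
```

On paper the system is 88% accurate, yet travelers in the smaller group are misclassified at triple the majority’s rate.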

“It’s important to stress that accuracy and fairness are not necessarily the same thing,” Kaltheuner said.

Follow Haley Kennington on Twitter




via Big League Politics
