Hungary, Latvia and Greece test AI lie-detector to screen visitors


Trials are underway of an EU-funded scheme in which AI lie-detector systems will be used to screen potentially dodgy travelers arriving from outside the bloc. Too Orwellian? Or just the latest step towards smoother travel?

From November 1, the iBorderCtrl system will be in place at four border crossing points in Hungary, Latvia and Greece on frontiers with countries outside the EU. It aims to facilitate faster border crossings for travelers while weeding out potential criminals or illegal crossings.

Developed by partners across Europe with €5 million in EU funding, the pilot project will be operated by border agents in each of the trial countries and led by the Hungarian National Police.

Those using the system will first have to upload certain documents, such as a passport, along with an online application form, before being assessed by the virtual, retina-scanning border agent.

The traveler will simply stare into a camera and answer the questions one would expect a diligent human border agent to ask, according to New Scientist.

“What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”

But unlike a human border guard, the AI system is analyzing minute micro-gestures in the traveler’s facial expression, searching for any signs that they might be telling a lie.

If satisfied with the traveler’s honest intentions, iBorderCtrl will reward them with a QR code that allows them safe passage into the EU.

If it is not satisfied, however, travelers will have to go through additional biometric screening, such as having fingerprints taken, facial matching, or palm-vein reading. A final assessment is then made by a human agent.
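To make the reported flow concrete, here is a minimal sketch of that triage logic in Python. Every name, the risk score, and the threshold are illustrative assumptions; iBorderCtrl’s actual internals have not been published.

```python
from dataclasses import dataclass

# Illustrative sketch of the triage flow described above.
# All names, thresholds, and scoring details are hypothetical; they are
# not taken from iBorderCtrl's (unpublished) implementation.

RISK_THRESHOLD = 0.5  # assumed cut-off between "satisfied" and "not satisfied"

@dataclass
class ScreeningResult:
    decision: str            # "qr_pass" or "biometric_escalation"
    needs_human_review: bool

def triage(deception_risk: float) -> ScreeningResult:
    """Map a deception-risk score to the next screening step.

    deception_risk: assumed score in [0, 1] from the micro-gesture
    analysis; higher means the traveler is judged more likely deceptive.
    """
    if deception_risk < RISK_THRESHOLD:
        # System is satisfied: issue a QR code granting passage.
        return ScreeningResult(decision="qr_pass", needs_human_review=False)
    # Otherwise escalate to fingerprints, facial matching, or palm-vein
    # reading, with a human border agent making the final call.
    return ScreeningResult(decision="biometric_escalation", needs_human_review=True)

# Example: a borderline traveler is escalated rather than rejected outright.
print(triage(0.62))
```

Note that in this sketch a flagged traveler is never refused by the machine alone; escalation always ends with a human decision, matching the process described above.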

Like many AI technologies in their infancy, the system is still highly experimental. With a current success rate of 76 percent, it won't actually prevent anyone from crossing the border during its six-month trial, but the system's developers are "quite confident" that accuracy can be boosted to 85 percent with fresh data.
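Taken at face value, a 76 percent success rate means roughly one assessment in four is wrong, which helps explain why the trial is advisory only: a mistaken verdict sends a traveler to extra screening and a human agent rather than to a refusal.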

However, greater concern comes from civil liberties groups, which have previously warned about the gross inaccuracies found in systems based on machine learning, especially ones that use facial recognition software.

In July, the head of Londonโ€™s Metropolitan Police stood by trials of automated facial recognition (AFR) technology in parts of the city, despite reports that the AFR system had a 98 percent false positive rate, resulting in only two accurate matches.
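By way of illustration (the exact alert counts are not given here): if such a system raised roughly a hundred alerts and all but two flagged the wrong person, the false positive rate among alerts would come out near 98 percent, which is consistent with the two accurate matches reported.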

The system had been labelled an “Orwellian surveillance tool” by the civil liberties group Big Brother Watch.
