'Alarming' misuse of AI to spy on activists, journalists 'under guise of preventing terrorism': UN expert
UN expert Fionnuala Ní Aoláin says, 'The unregulated transfer of high-risk technologies to States engaging in systematic human rights violations must end'
A United Nations expert has warned of an "alarming" trend of "using security rhetoric" to justify the use of "intrusive and high-risk technologies," including artificial intelligence, to spy on human rights activists and journalists.
U.N. expert Fionnuala Ní Aoláin called for a moratorium on the development of AI and other advanced technologies, such as drones, until "adequate safeguards are in place," according to a March 2023 report presented to the Human Rights Council.
"Exceptional justifications for the use of surveillance technologies in human rights 'lite' counter-terrorism often turn into mundane regular use," Ní Aoláin said in a statement after the report's release.
Without meaningful oversight, she argued, countries and private actors can use AI-powered tech with impunity "under the guise of preventing terrorism."
"Abusive practices are hardwired into counter-terrorism and countering violent extremism," said Ní Aoláin, a University of Minnesota professor and a U.N. Human Rights Council-appointed special rapporteur.
Creating AI guardrails and safeguards is a daunting task that the U.S., like many other governments around the world, is trying to tackle, and one that many experts have argued is unprecedented.
Generative AI has the potential to create a utopia or to plunge a country into a dystopia, experts have claimed.
"AI is one of the more complex issues we have ever tried to regulate," Kevin Baragona, founder of DeepAI.org, told Fox News Digital in a previous interview. "Based on current governments' struggle to regulate simpler issues, it's looking hard to be optimistic we'll get sensible regulation."
However, banning the technology altogether, as Italy initially attempted to do, would set a nation back for the next century, Baragona said.
"In the absence of regulation, the cost to human rights can only increase with no end in sight," Ní Aoláin said.
AI was among a handful of "high-risk technologies" that she discussed. The topic was broken out as its own subsection in the 139-page report.
"AI has the properties of a general-purpose technology, meaning that it will open up wide-ranging opportunities for application," she wrote in her report.
The technology is already being deployed in social, economic, political and military settings, and is integrated into law enforcement, national security, criminal justice and border management systems.
Several cities across the U.S. have tested various applications of AI in pilot programs.
At the heart of these AI systems are algorithms that build profiles of people and predict their likely future movements by drawing on vast amounts of data, including historical, criminal justice, travel, communications, social media and health information.
It can also identify places as "likely sites of increased criminal or terrorist activity" and flag individuals as alleged suspects and future re-offenders, according to Ní Aoláin's report.
"The privacy and human rights implications of this kind of data collection and predictive activity are profound for both derogable and non-derogable rights," she said.
"The Special Rapporteur highlights her profound disquiet at AI assessments being used to trigger State action in counter-terrorism contexts, from searching, questioning, arrest, prosecution and administrative measures to deeper, more intrusive surveillance.
"AI assessments alone should not be the basis for reasonable suspicion given its inherently probabilistic nature."