A smartphone displays a ChatGPT logo and a computer motherboard in an illustration on Feb. 23. Photo: Reuters

BIAS RISK: Legal intervention is needed to prevent tech companies and totalitarian states from using technology to undermine democracy and culture, a researcher said

By Hsu Tzu-ling and Jonathan Chin / Staff reporter, with staff writer

Executive Yuan spokesperson Lo Ping-cheng (羅秉成) on Saturday gave a speech written by ChatGPT to highlight concerns about biases in artificial intelligence (AI) at a conference in Taipei discussing the Cabinet's draft bill on the technology.

The growing maturity of AI has made life more convenient and offers solutions to many challenges, Lo told the event hosted by the Taiwan Artificial Intelligence Development Foundation.

The government considers the development of AI a significant part of national policy, and would ensure that technology serves the needs of society while safeguarding civil rights and public safety, he said.

A draft bill for a basic law on AI would be key to bolstering legal protection of personal data, he said, before revealing that he had used ChatGPT to write his speech.

The capability of algorithms raises troubling questions about the relationship between AI-generated content and laws concerning intellectual property and freedom of speech protections, Lo said, adding that the government is “wary and fearful” about the issue.

Last year, the Executive Yuan launched an initiative to draft laws and regulations governing private data protection and the use of AI, after assessing that the nation was lagging behind others in its technology laws, he said.

The Ministry of Digital Affairs, the National Development Council and the National Science and Technology Council have been tasked with creating new laws and regulations, he said.

Advances in AI have had a significant impact on the creation of text and graphic content, which would have broader ramifications for the propagation of ideologies, foundation chairman Ethan Tu (杜奕瑾) said.

AI bias refers to the tendency of algorithms to reflect the national, cultural and ideological biases of their creators, he said.

Protecting democracy and culture from AI-empowered tech giants and totalitarian states would require legal intervention, he said.

Training an AI on Chinese-language records and data might transmit Chinese political and cultural biases into the model, which could then influence its future users, he said.

This means Taiwan's technology industry must be regulated in a way that fosters a local base for AI creation and research, while legal precautions should be taken to address the technology's potential to spread biases, he said.

Intellectual property, privacy protection and liability rules in cases where the use of AI results in damage to life or property are other legal areas of concern, he added.

Additional reporting by Ou Yu-hsiang

News source: TAIPEI TIMES