The Institute of Information Security held its first press conference. In attendance were (from left) Deputy Director of the Information Security Administration Lin Chunyin, Director of the Digital Government Department of the Ministry of Digital Development Wang Chengming, and Director of the Institute of Information Security He Quande.

(Photo by reporter Xu Ziling)

[Reporter Xu Ziling/Taipei Report] Generative AI has become popular, but it has also raised concerns about personal data and information security.

He Quande, Director of the Institute of Information Security, said today (the 29th) that the institute will refer to EU rules on AI and formulate testing standards and tools for AI technology, to check whether algorithms are open and transparent, whether they protect privacy, and whether they are objective. The institute hopes to unveil these before the end of the year.

The Institute of Information Security was established in January and held its first press conference today, attended by Institute Director He Quande, Deputy Director of the Information Security Administration Lin Chunyin, and Director of the Digital Government Department of the Ministry of Digital Development Wang Chengming.


Asked by the media about the development of AI in Taiwan, He Quande said AI raises two major issues: whether AI itself is safe, and how to use AI to improve information security.

Regarding the safety of AI, the Ministry of Digital Development has made forward-looking plans. The Institute of Information Security will work with the Industrial Technology Research Institute, draw on EU rules on AI, and establish a testing mechanism for AI technology to minimize AI risks.

He Quande said that Taiwan wants to develop its own AI, and whether that AI can be trusted is a key question. The National Science Council has formulated the "Guidelines for Artificial Intelligence Research and Development," and there are also international standards covering matters such as trustworthiness, data objectivity, and personal data protection; the Institute of Information Security will study the testing methods for these areas.

So what will be tested?

He Quande pointed out that the tests will cover accuracy, reliability, objectivity, security protection, explainability, and other aspects, mainly to ensure that AI is safe and trustworthy. Since AI will be widely used in many fields in the future, the Institute of Information Security must formulate testing standards and tools so that enterprises can check whether the systems they use are sound.

However, He Quande emphasized that the Institute of Information Security is not an agency with public authority but a research unit; whether the country sets standards for AI products must be decided by the competent authority. The institute will first plan the framework and hopes to release it before the end of the year.

Lin Yingda, Deputy Director of the Institute of Information Security, added that the institute and the Industrial Technology Research Institute will conduct long-term testing of products that claim to offer information security functions. ChatGPT was chosen as the first target, to see whether it has adequate defenses.

He said that the overseas version of ChatGPT will be tested first in the second half of the year; the Taiwan version of generative AI will only be launched at the end of the year, so its testing may not be ready until the first or second quarter of next year.

Lin Yingda likened ChatGPT to a child that learns things indiscriminately: one has to find out what it has learned wrong, otherwise it will keep producing incorrect information. It cannot be allowed to learn without checks; it should have an anti-fraud mechanism, and errors should be caught during training.
