The Office of the Privacy Commissioner for Personal Data on Tuesday issued a set of guidelines for organisations to protect data when using artificial intelligence systems.
The privacy watchdog said local institutions are becoming more aware of the privacy risks that AI poses as more and more of them adopt the technology.
Almost half of all organisations in Hong Kong are expected to use AI this year – up from 30 percent in 2023, according to the Productivity Council.
Privacy Commissioner Ada Chung said large amounts of information need to be collected to train or operate AI systems, and people's privacy will be infringed if these details are leaked.
She said the new Model Personal Data Protection Framework details steps organisations should take when procuring and using any AI system that makes use of personal details.
For example, institutions are advised to minimise the collection of personal data, conduct rigorous testing of their systems, ensure data security and devise an incident response plan.
Chung said organisations should also let workers retain control when high-risk AI applications are used to help make decisions.
"If the use of the AI system involves the collection of biometric data, of course that would be a high-risk scenario. And if the AI system is to provide recommendations on the granting of credit to individuals or institutions, that would also be a high-risk scenario," she said.
"On the other hand, if the AI system is only used to provide basic information relating to the services or products of the organisation, such as information provided by an AI chatbot, the risk would be lower."
Chung added that organisations should also offer their customers adequate information on their AI systems and allow them to access, correct or delete their data.