The Future of Privacy Forum recently made a significant contribution to the governance of biometric data in immersive technologies with the publication of its Risk Framework for Body-Related Data in Immersive Technologies report. The framework provides best practices for organisations and businesses to handle, manage, and transfer body-related data responsibly across different entities.
The framework was co-authored by Jameson Spivack, Senior Policy Analyst for Immersive Technologies, and Daniel Berrick, Policy Counsel. It serves as a comprehensive resource for organisations working with immersive technologies, which often collect large amounts of biometric data.
The report sets out a four-stage approach to handling biometric data. It begins with creating data maps and documenting data use practices, followed by an analysis of applicable legal frameworks to ensure compliance with existing privacy laws. The report then emphasizes identifying and assessing risks to individuals, communities, and societies, and implementing best practices to minimize privacy risks and ensure the fair and ethical use of data.
The FPF’s framework is a valuable tool for organisations adopting immersive technologies, offering detailed recommendations for handling biometric data responsibly. These include localizing and processing data on devices, minimizing data footprints, managing third-party relationships, providing meaningful notice and obtaining consent, preserving data integrity, and offering user controls. By following these guidelines, organisations can align their data practices with a coherent strategy and consistently assess and minimize privacy risks.
In addition to the release of the FPF’s report, the European Union has taken steps to regulate artificial intelligence (AI) through the AI Act, which aims to protect citizens from harmful and unethical uses of AI-based solutions. The European Commission’s goal is a regulatory framework that analyzes and classifies AI systems according to the risk they pose to users.
Furthermore, the Biden-Harris administration in the United States has issued an executive order on the regulation of AI. The order aims to safeguard Americans from the harmful effects of AI by, among other measures, requiring developers of the most powerful AI systems to share their safety assessments with the US government.
The UK also hosted the AI Safety Summit at Bletchley Park, which brought together industry experts, executives, and organisations to outline protections for AI and collaborate on best practices and regulatory frameworks for AI safety.
The publication of the FPF’s framework and the ongoing efforts of governments and organisations to regulate AI and immersive technologies reflect a growing awareness of the importance of data privacy and the ethical use of technology. They also highlight the need for collaborative efforts and cross-border alliances to develop regulatory frameworks and best practices for emerging technologies. Through these efforts, the global tech community and regulatory bodies aim to protect individuals and communities from the risks associated with biometric data and AI.