Data protection meets AI: New DSK guideline specifies requirements
The handling of personal data in AI systems presents companies and developers with increasingly complex challenges. With its guidance published in June 2025, the Conference of Independent Data Protection Supervisory Authorities (DSK) now provides specific recommendations for the data protection-compliant development and operation of AI systems.
Review of the DSK guidance on AI from 6 May 2024
Back in May 2024, the DSK published its first guidance on the data protection-compliant use of AI applications, which was primarily aimed at data controllers. It contains basic guidelines on the design, implementation and use of AI systems. Although the guidance also contained initial information for developers, manufacturers and providers of AI systems, there were no specific recommendations tailored to these target groups.
Data protection, technical and organisational requirements for AI systems
The guidance published by the DSK in June 2025 now addresses this gap and is primarily aimed at developers and manufacturers of AI systems. It defines data protection, technical and organisational requirements along the four lifecycle phases of an AI system (design, development, introduction and operation) and assigns these to the respective guarantee objectives.
The following presentation is only an excerpt of the contents of the guidance and does not claim to be exhaustive. It is for general information purposes only and does not replace legal advice.
(1) In the design phase of an AI system, fundamental decisions are made, for example on system architecture, choice of model and technical infrastructure. The selection and collection of data for training, validating and testing the AI system is particularly relevant.
A legal basis is required for the processing of training data. It must also be ensured that the data does not originate from unlawful sources (e.g. illegally obtained data sets).
The origin of the data must be documented transparently and be traceable. For early verifiability, the DSK also recommends documenting, among other things: the purpose and legal basis, the necessity of the processing, the objective and function of the AI system, and the AI system architecture.
Measures for confidentiality, integrity, intervenability and separation of data sets must already be taken at this stage in order to avoid unwanted profiling. In addition, the purpose of the AI system and the scope of data required for this purpose must be defined in the interests of data minimisation.
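The data minimisation requirement can be illustrated in code. The following is a minimal, purely illustrative sketch (the purposes and field names are assumptions, not taken from the DSK guidance): each processing purpose is mapped to the set of fields defined as necessary for it, and everything else is discarded before processing.

```python
# Illustrative sketch of data minimisation: keep only the fields that
# were defined as necessary for a declared processing purpose.
# Purposes and field names below are hypothetical examples.
PURPOSE_FIELDS = {
    "credit_scoring": {"income", "payment_history"},
    "support_chatbot": {"user_message"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop all fields not required for the declared purpose."""
    allowed = PURPOSE_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No field set defined for purpose: {purpose}")
    return {key: value for key, value in record.items() if key in allowed}

raw = {"income": 42000, "payment_history": "ok", "religion": "n/a", "name": "A. B."}
print(minimise(raw, "credit_scoring"))
# fields outside the defined set ("religion", "name") are removed
```

Defining the permitted fields per purpose up front, rather than filtering ad hoc, also produces the kind of documentation of purpose and data scope that the guidance calls for.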
Finally, precautions must be taken to safeguard the rights of data subjects and to implement official orders in connection with training data and models.
(2) The development phase focuses on data preparation, training and validation of the AI systems.
The DSK recommends documenting the AI algorithms used for AI models. In addition, target values for AI models should be defined for validation purposes and test procedures should be developed to ensure the purpose limitation of AI models.
In order to safeguard the data protection guarantee objectives, only data that is necessary for the respective processing purpose and specific processing step may be used.
In addition, comprehensive documentation of the training processes, data sources and model decisions is required in order to fulfil transparency and accountability obligations.
The traceability and contestability of personal data and results as well as intervention options must be guaranteed.
Protective measures must be taken against unintentional or malicious changes. The integrity and confidentiality of an AI system encompass the unalterability and correctness of the processed data and the generated results.
(3) In the introductory phase, the AI system is transferred to the productive environment and made available to users. Particular attention must be paid to data protection-friendly default settings ("data protection by default"). The AI system must be pre-set so that it only processes the personal data required for the respective purpose.
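What "data protection by default" means for pre-settings can be sketched briefly. The following is a hypothetical example (the setting names are assumptions for illustration): every optional processing of personal data defaults to off, so that extra processing only occurs after an active opt-in by the user.

```python
from dataclasses import dataclass

# Illustrative sketch of "data protection by default": optional personal-data
# processing is disabled unless the user actively opts in.
# All setting names are hypothetical.
@dataclass
class AISystemSettings:
    store_chat_history: bool = False       # off by default
    use_inputs_for_training: bool = False  # off by default
    share_usage_analytics: bool = False    # off by default

settings = AISystemSettings()
print(settings.store_chat_history)  # the default is the most protective option

# A user may still enable a feature explicitly:
opted_in = AISystemSettings(store_chat_history=True)
```

The design choice here is that the most protective option is the zero-configuration state; enabling additional processing always requires an explicit, documentable action.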
Central usage decisions - for example on functionality, possibilities for human intervention and data subject rights - must be transparently documented and made available.
(4) The quality of the output of AI systems should be continuously evaluated as part of operation and monitoring. It must be ensured that the data protection measures taken in advance are also complied with during operation, in particular purpose limitation, data minimisation and the effectiveness of technical and organisational measures.
AI model parameters and processing steps must be documented. In the event of changes and updates, the AI system must be repeatedly checked and validated. If more personal data is processed than necessary, corrective measures must be taken to minimise data.
The AI system must safeguard the rights of data subjects. In particular, it must be possible to remove data completely when a deletion request is made. Under certain circumstances, this requires retraining the AI system ("machine unlearning").
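The link between a deletion request and retraining can be made concrete with a short sketch. This is a hypothetical illustration (function and variable names are assumptions): the subject's data is removed from operational storage, and if it was also part of the training set, the system flags that a retraining run without that data is needed.

```python
# Illustrative sketch: handling a deletion request. If the subject's data
# was used for training, complete removal may require retraining the model
# ("machine unlearning"). All names and structures are hypothetical.
def handle_deletion_request(subject_id: str, training_index: set, live_store: dict) -> bool:
    """Delete stored data and report whether retraining is required."""
    live_store.pop(subject_id, None)    # remove from operational storage
    needs_unlearning = subject_id in training_index
    training_index.discard(subject_id)  # exclude from future training runs
    return needs_unlearning             # True -> schedule retraining

store = {"u1": {"name": "A"}, "u2": {"name": "B"}}
trained_on = {"u1"}
print(handle_deletion_request("u1", trained_on, store))  # True: retrain without u1's data
```

Keeping an index of which subjects' data entered which training run is itself a documentation measure that makes such requests implementable at all.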
In addition, compliance with the quality requirements from the development phase must be regularly checked and changes in the behaviour of the AI systems must be identified and evaluated.
The integrity and confidentiality of training data and AI models must be guaranteed - especially in the case of publicly accessible systems.
Conclusion
The DSK's current guidance represents a welcome step in the right direction to promote the data protection-compliant use and operation of AI systems. With its recommendations and checklists, it provides a helpful basis for the development, production and use of AI systems. In particular, the information on transparency and documentation can help to ensure that data protection requirements are taken into account and implemented at an early stage and in a structured manner.
Source: DSK guidance on recommended technical and organisational measures for the development and operation of AI systems, version 1.0, as at: June 2025






