The Law and Ethics of A.I.-driven biomedical innovation
Code 550NN
Credits 6
Learning outcomes
This course addresses fundamental regulatory questions concerning A.I.-driven biomedicine. As the convergence of ‘omics’ science and advanced data analytics drives a wave of biomedical innovation, it is essential for future professionals in this field to understand its social implications and keep abreast of the relevant legal and ethical frameworks.
Participants will gain an understanding of the legal and ethical implications of developing A.I.-based biotechnological applications from lab to market.
After a comparative overview of regulatory strategies in the European Union and the United States, we will address some of the most relevant problems raised by the application of A.I. to biomedicine: the tension between informed consent procedures and ‘black-box’ algorithms; how data safety and transparency requirements vary depending on the task and the associated risks; the potential for discrimination and unfairness inherent in some machine-learning applications; and the risks to patients’ privacy, which extend well beyond the patient-doctor relationship.
We will then analyze how legal systems respond to these concerns, with a focus on the European Union. Participants will learn how to identify and minimize legal liabilities; comply with relevant regulations concerning product standardization and certification; embed fundamental rights protection and fairness considerations in the development of A.I. applications; and adopt risk assessment and intellectual property rights (IPR) protection strategies, including those concerning data management. Particular attention will be given to how this course complements the more technical areas of expertise gained in other courses.