AI for law and governance

Scientific Unit Director: Prof. Antonino Rotolo

Scientific and executive representatives: Dr Chiara Valentini, Dr Giuseppe Contissa

CIRSFID - the research centre on legal philosophy and legal informatics of the University of Bologna


In the legal domain, a range of AI models, standards and applications are being developed to analyse and classify documents, apply complex regulations, suggest or predict the outcomes of cases, detect or anticipate illegal behaviour, evaluate legal evidence, analyse sets of legal cases and social data to detect trends and anticipate changes, and govern the interaction of autonomous systems. In this context, AI methods are being inspired by, and hybridised with, legal theories.

In politics, AI can support evidence-based rational decision-making, as well as citizens' involvement in political choices, and can facilitate political communication and the aggregation of opinions; at the same time, its misuse may have a disruptive effect on democratic processes, affecting the formation of public opinion (e.g. in elections).

Research on AI for law and governance aims to address the deployment of AI in government and law, to develop research in law and politics, and to support the development of effective, innovative and context-sensitive solutions, thus contributing to democracy and the rule of law. It also aims to analyse the legal and policy issues emerging from the deployment of AI technologies (in multiple domains such as data protection, consumer protection, competition, liability, insurance, employment, administration and political communication), and to design solutions that enable us to profit from potentially disruptive new technologies while remaining consonant with legal and social values.

Research on the use of AI techniques in legal and social policy areas:

  • Computational models for AI knowledge, reasoning, decision-making and adjudication
  • Methods and systems to analyse and classify legal documents or texts related to public debate
  • Methods and systems for predicting judgments or policy orientations
  • Methods and systems for the representation of knowledge through legal ontologies, markup language standards, linked open data and knowledge graphs
  • Models and systems for the extraction of legal knowledge from legal texts using ML and NLP techniques
  • Methods and systems for legal data analytics using legal big data
  • Methods for the adaptive visualisation of texts, concepts and legal norms, also drawing on embodied cognition and legal design theories; this is the operational context of the Legal Theory and Cognitive Science Laboratory
  • AI to support policy making
  • AI modelling in economics and politics
  • Tools for the development and implementation of models to combat organised crime, cyber-attacks, and online cyber-crimes such as hate speech, fake news and cyberbullying
  • Tools to promote digital citizenship and combat discrimination, especially to protect vulnerable groups (e.g. minors and children) and in relation to gender
  • Legal and social issues of AI: data protection, consumer protection, competition law, liability, etc.
  • AI, ethics, and human rights (fairness, transparency, explainability, human control, accountability, trust)
  • AI and governance (political participation and deliberation; democratic representation; electoral systems and processes)
  • AI and alternative dispute resolution
  • Bioethics, biobanking and big data research in health care