UPDATE: The conference will be held completely virtually; the optional conference dinner on December 1 will take place in person at "Het plein" in The Hague.
This conference is the follow-up to the expert meeting “Making governments AI Proof”, which took place in December 2020 in The Hague. It will continue the conversation on how governmental institutions can better organize data-driven policymaking, including the use of Artificial Intelligence for the public good. The following questions will be addressed specifically:
How can we gain trust in the use of AI within Governments?
AI is often regarded as a “bogeyman”. We therefore need to enhance awareness and regulation in order to use AI responsibly. In the long run, not using AI does not seem to be an option.
Some experts state that algorithms need to be certified and listed in order to build more trust. In The Netherlands and Finland, algorithm registers have been established for this purpose. In order to use these registers properly, we need to agree on how we wish to scrutinize or supervise AI, and make sure that transparency is not pursued merely for its own sake, but is actionable for various stakeholders. Algorithms are not right or wrong in themselves; it is their use that can lead to mistakes.
A point of attention is how to turn around the public's stigma towards AI, shifting the focus to the improvements AI may foster and is already making. The Netherlands Court of Audit's finding that no forms of “uncontrolled AI” had been found received far less attention than the more common practice of blaming algorithms to cover political or governmental misconduct. Sometimes, algorithms might deserve to be credited in politics or by the media as the cause of success.
How to use AI as an integral and secure part of government innovation?
How can we stimulate innovation responsibly? Industry best practices are key, as they can demonstrate which innovations will or will not work, in a sector where turmoil is generally lower than in the public sector.
Although there is a risk that the private sector will outclass the public sector, public-private cooperation is essential for innovation. This is not without complications: governments need to be fully transparent about the related conditions; private ownership of data may, for example, become a hurdle too high for the public sector to overcome.
Another point of attention is that we should leave ample space for experiments, in order to find out how we can work with algorithms, for example to improve the prediction of specific interventions or to reduce costs. We should establish standards and create a safe environment in which these experiments can take place for the purpose of learning.
What can governments do internally to improve data-driven policymaking?
Standards are important for cooperation not only at the technological level but also at the process level. In all these processes we need standards to bridge data use between public agencies and departments, between governmental levels and, in some cases, between public and private parties. To create transparent AI, we not only need to build trust between parties and agree on conditions; some central management and control is essential as well.
In general, we need data engineers and data scientists to build AI, we need academia to understand the underlying principles and methods, and we need policymakers with the capacity to deal with both the conditions and the consequences for the people. This conference will discuss how to make use of each other's expertise in, particularly, the earliest phase of development, allow for piloting and a safe environment for learning, and share best practices for generating benevolent AI that is instrumental in improving the public good.
We are looking forward to virtually welcoming you in The Hague!
Professor of Public Administration, Leiden University
Chief Executive Officer of the Data Coalition and President of the U.S. Data Foundation
Chair of the Trustworthy Artificial Intelligence Group, NOREA
Vice-President of the Netherlands Court of Audit
Senior Director, Rule of Law, Responsible Tech and European Government Affairs at Microsoft
Director Artificial Intelligence at the Ministry of Justice and Security
Professor in the Department of Computer Science at University College Cork and Vice Chair of the High-Level Expert Group on AI (2018 - 2020)
Chief Scientist and Managing Director, Science, Technology Assessment, and Analytics at the United States Government Accountability Office
Senior Policy Analyst, OECD
Strategist and Manager of the Netherlands AI Coalition
Professor at the NYU Wagner Graduate School of Public Service and Co-Founder of the Coleridge Initiative
Professor of Science Communication at Aalborg University and Knowledge broker for Algorithms, Data and Democracy (ADD)
Director Trusted Analytics and AI in Control at KPMG, The Netherlands
Innovation Officer and Programme Coordinator Public Tech at the Municipality of Amsterdam
Responsible Innovation Ethics & Policy Advisor at Google
Policy Officer for Policy, Data and Text Mining at the European Commission, Joint Research Centre
President and Co-Founder of ALLAI
Lead Data Scientist at Rijkswaterstaat, The Netherlands
Senior Scientist at the Netherlands Scientific Council for Government Policy and Project Lead of the report 'Mission AI. The New System Technology'