Program
The program is subject to change.
9 September 2021
Opening Addresses | 10:00 – 11:00
Keynotes | 11:00 – 12:00 | European Approach to Regulating AI
Lunch Break | 12:00 – 13:00
Keynotes | 13:00 – 14:30 | Activities of International Organizations
Keynotes | 14:30 – 15:50 | Regulating AI from Global Perspective
Coffee Break | 15:50 – 16:00
Keynotes – Czech Women in AI Panel (CWAI) | 16:00 – 17:00 | Women in AI and Ethics
Closing Remarks | 17:00 – 17:10
10 September 2021
Academic & Business Panel | 10:30 – 11:30 | AI and Cybersecurity
Coffee Break | 11:30 – 11:45
Academic & Business Panel | 11:45 – 12:30 | AI in Business from Public Perspective
Lunch Break | 12:30 – 13:30
Academic & Business Panel | 13:30 – 14:30 | AI in Business and Finances
Coffee Break | 14:30 – 15:00
Academic & Business Panel | 15:00 – 16:30 | AI, Media, Disinformation and Democracy
Closing Remarks | 16:30 – 16:40
Program details
Welcome Note by Ondřej Beránek
Opening Address by Věra Jourová
Opening Address by Milena Hrdinková
Opening Address by Petr Očko
Opening Address by Alex Ivančo
European Approach to Regulating AI
On 21 April 2021, the European Commission proposed new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The proposed rules, which are based on a future-proof definition of AI and follow a risk-based approach, will be presented in this session.
Creating a Common Approach for Responsible AI Development and Deployment
Artificial Intelligence brings great opportunity but also great responsibility, and we are at the stage where the choices we make must be grounded in principles and ethics to ensure the future we all want.
In this keynote, Cornelia Kutterer will share Microsoft’s progress and journey in putting its AI ethical principles into practice. The keynote will also address Microsoft’s response to the European Commission’s AI Act, drawing on the ways in which its customers deploy AI, emerging trends in the commercial and research landscapes, and the experience of developing Microsoft’s own internal responsible AI program.
UNCITRAL’s Digital Economy Project: Towards a Next Generation Legal Framework for Digital Trade
The keynote address will provide an overview of UNCITRAL’s ongoing work on legal issues related to the digital economy, how it builds on UNCITRAL’s past work on e-commerce, and how it fits with other international initiatives in the digital space. Ms Joubin-Bret will outline the recent proposal for legislative work on AI and automated contracting, as well as other exploratory work being carried out by the UNCITRAL secretariat on data transactions, online platforms and dispute resolution.
From Principles to Practice with OECD.AI and Globalpolicy.AI
In 2019, OECD member countries and partners adopted the first intergovernmental standard on AI, the OECD AI Principles, which provide guidance on how governments and other actors can shape a human-centric approach to trustworthy AI. Two key initiatives are underway at the OECD to implement the AI Principles. The first, the OECD.AI Policy Observatory, concentrates on building and sharing evidence and data on AI policies, while the second, the OECD.AI Network of Experts, focuses on developing tools and practical guidance to facilitate implementation.
AI from the Perspective of Horizontal and Sectoral Legislation: An Update on CAHAI’s Activities in 2021
In our presentation at SOLAIR 2020, we introduced the Council of Europe’s activities in relation to artificial intelligence in general and CAHAI’s work in particular. This year, we will update the audience on CAHAI’s first deliverable – its feasibility study – adopted in December 2020, and on the draft documents produced by its various working groups in 2021, which should culminate in a final deliverable at the end of CAHAI’s mandate in December 2021.
UNIDROIT’s Digital Assets and Private Law Project – Progress Update on the Working Group and Intersessional Work
The presentation will briefly cover the outcomes of the latest Working Group meeting held in July 2021, along with an overview of the draft Principles as developed through the Working Group sessions and intersessional work.
AI Policy: Moving from Principles to Practice in the US
Policymakers are moving toward the next stage of AI policy development – shifting from principles to practice in sectors across the economy. Translating these important high-level principles into actionable policy is a challenge, but there are new frameworks that MIT and others are developing to support the process at the sector and economy level. In the United States, legislators have proposed or passed a series of bills to enhance AI development, position the US competitively, and address social risks. The responsibilities assigned to different government departments in these bills can illustrate the evolving US approach.
Big Data Legal Issues in Contact Tracing and Vaccine Studies in Israel
Israel has been at the forefront of contact tracing and pharmacovigilance during the current pandemic. This has necessitated collecting huge amounts of data. This ongoing collection raises numerous legal concerns that will be discussed.
Can We Have a Consistent Worldwide Regulation of AI?
Artificial Intelligence is a borderless technology connecting people from various cultures and backgrounds. Given AI’s disruptive potential, new rules need to be set out. International organizations, national governments, and non-profit organizations are currently preparing a number of regulatory initiatives that may affect one another and make AI regulation contradictory and complex. The question is whether such regulation can be consistent and what obstacles we would need to overcome.
Responsible AI: From Principles to Practice
Ensuring the responsible development and use of AI is becoming a main direction in AI research and practice. Governments, corporations and international organisations alike are coming forward with proposals and declarations of their commitment to an accountable, responsible, transparent approach to AI, in which human values and ethical principles are leading. Many of the risks of AI, including bias, discrimination and lack of transparency, can be linked to the characteristics of the data-driven techniques currently driving AI development, which are stochastic in nature and rely on ever larger datasets and computations. Such approaches perform well on accuracy but much worse on transparency and explanation. Rather than focusing on limiting risks and safeguarding ethical and societal principles, AI governance should be designed as a stepping stone for sustainable AI innovation. More than limiting options, governance can be used to extend and improve current approaches towards a next generation of AI: truly human-centred AI. This capacity must be nurtured, with strong support for research and innovation in alternative AI methods that can combine accuracy with transparency and privacy, as well as multi-disciplinary efforts to develop and evaluate the societal and ethical impact of AI. Responsible AI is fundamentally about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and well-being in a sustainable world.
Artificial Intelligence & Cybersecurity: Benefits and Perils
As a general-purpose, dual-use technology, AI can be both a blessing and a curse for cybersecurity. This is confirmed by the fact that AI is being used both as a sword (i.e. in support of malicious attacks) and as a shield (to counter cybersecurity risks).
Adopting AI in the realm of cybersecurity could lead to significant problems for society if security and ethical concerns are not properly addressed. To mitigate these risks, policy recommendations to promote both AI for cybersecurity and cybersecurity for AI will be presented.
Cyber Security with AI – From Days to Milliseconds
Cyber security without artificial intelligence can no longer be effective. As attacks become more frequent, sophisticated and harmful, AI is the only way to protect organizations, users and the booming number of smart devices. This talk will give insight into how Microsoft puts AI into effective day-to-day cyber security.
Digital Assets and Private Law – UNIDROIT 2021
This keynote covers the activities of UNIDROIT (the International Institute for the Unification of Private Law) and the latest developments in the Digital Assets and Private Law project, approved in 2020, including the structure of the project and brief information about the future Principles and Legislative Guidance on digital assets.
Consumer-Empowering AI: The Case of Claudette
Recent years have been tainted by market practices that continuously expose us, as consumers, to new risks and threats. AI is, indeed, perceived as something at the service of businesses, and it is currently affecting consumers along multiple dimensions. This presentation proposes a paradigm shift in which AI technologies are brought to the side of consumers and their organizations, with the aim of building an efficient and effective counter-power.
The Claudette system will be presented as an example of a machine learning-based system aimed at partially automating the detection of unfairness and unlawfulness in consumer contracts and privacy policies.
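To illustrate the kind of clause-classification task such a system automates, the minimal sketch below trains a toy classifier that flags potentially unfair contract clauses. It assumes scikit-learn and uses invented example clauses and labels; it does not reflect Claudette’s actual models, features or training corpora.

```python
# Minimal illustrative sketch: flagging potentially unfair contract clauses
# with a bag-of-words classifier. Clauses and labels are invented for
# illustration only; Claudette's real system is more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "The provider may terminate your account at any time without notice.",
    "You may cancel the service at any time through your account settings.",
    "The provider may change these terms unilaterally and without notice.",
    "Refunds are issued within 14 days of a valid cancellation request.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = fair (toy labels)

# TF-IDF features over unigrams and bigrams, fed to a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

new_clause = "We reserve the right to suspend your account without prior notice."
print(model.predict([new_clause]))  # e.g. [1] -> flag the clause for human review
```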
AI in the Financial Services
In the Search for Ethical and Trustworthy AI Regulation for the Insurance Industry
How Do You Solve a Problem Like Misinformation?
Professor Calo will describe three key distinctions for researchers and policymakers in the burgeoning field of misinformation that can help guide the study of, and resistance to, misinformation.
The U.S. Election Integrity Partnership: Lessons Learned and Next Steps
Haughey will present a case study of the non-partisan Election Integrity Partnership, a research consortium that identified, tracked and responded to voting-specific dis- and misinformation during the U.S. elections. She will also discuss how this relates to her own work equipping journalists with the tools they need to investigate the “misinformation beat”.
Disinformation and the Deterioration of Trust
The rise of mis- and disinformation has contributed to the public’s loss of trust in key democratic institutions. This presentation will explore how loss of trust in key processes and institutions – such as elections and journalism – is fueled by mis- and disinformation, as well as how technology, public policy and multi-stakeholder interventions may be able to counter this trend.