Sebastian Szymański, Ph.D.
Sebastian Szymański, Ph.D., is a philosopher specializing in ethics and practical ethics. His research focuses on justice and on ethical problems raised by new technologies, in particular robotics and AI. He works at the Faculty of „Artes Liberales” of the University of Warsaw, where he is a member of the Techno-Humanities Lab. He teaches AI ethics, roboethics, and the ethics of new technologies. Sebastian Szymański chairs the ethics group in the expert team of the Ministry of Digital Affairs working on the Polish AI strategy. He is also a member of the Council for Digital Affairs of the Ministry of Digital Affairs, where he chairs the AI Workgroup. He recently published the book The Justifications of the Theory of Justice: The Legacy of John Rawls (Wydawnictwo Naukowe Scholar, 2018) and is currently working on a book on AI ethics.
Biases in Algorithms
Machine learning algorithms learn from the training data they are given, which is why they can reflect, or even magnify, the biases present in that data.
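To make this concrete, here is a minimal sketch of the mechanism. The data and the "model" are entirely hypothetical: a toy dataset in which one group was historically favored, and a stand-in classifier that simply learns the majority label per group, as a real model would when group membership is a strong feature.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired) pairs.
# Group "A" was hired 80% of the time, group "B" only 20% --
# the bias is in the labels, not in the learning procedure.
training_data = [("A", 1)] * 80 + [("A", 0)] * 20 + \
                [("B", 1)] * 20 + [("B", 0)] * 80

def train(data):
    """Learn the majority label for each group -- a toy stand-in
    for what a classifier does when 'group' dominates as a feature."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train(training_data)
print(model)  # {'A': 1, 'B': 0} -- the historical bias, now automated
```

The "algorithm" here is trivially simple and contains no explicit rule against group "B"; the discriminatory outcome comes entirely from the data it was trained on, which is exactly the point the paragraph above makes.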
How can we make sure algorithms are fair, especially when they are privately owned by corporations, and not accessible to public scrutiny? How can we balance openness and intellectual property?
Transparency of Algorithms
The fact that companies won’t allow their algorithms to be publicly scrutinized is worrying from an ethical point of view. Even more worrying, however, is that some algorithms are opaque even to their creators.
How can we balance the need for more accurate algorithms with the need for transparency towards the people affected by them? And if humans are often unaware of their true motives for acting, should we demand that machines be better at this than we actually are?
Supremacy of Algorithms
If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?
Fake News and Fake Videos
In this context, an ethical concern arises around the topic of (mis)information.
How can we slow the spread of false information, and who gets to decide which news counts as ‘true’?
Self-Driving Cars
Google, Uber, Tesla, and many others are joining this rapidly growing field, but many ethical questions remain unanswered.
As self-driving cars are deployed more widely, who should be liable when accidents happen? The company that made the car, the engineer who made a mistake in the code, or the operator who should have been watching? And once self-driving cars are safer than the average human driver, should we make human driving illegal?
Privacy vs Surveillance
The ubiquitous presence of security cameras and facial recognition algorithms will create new ethical issues around surveillance.
Should there be regulation of the use of these technologies? Given that social change often begins with challenges to the status quo and civil disobedience, could a panopticon lead to a loss of liberty and stifle social change?
Why Ethics in AI Matters
Obviously, we need appropriate legislation for AI in these fields. But we can’t legislate until society forms an opinion, and society can’t form an opinion until we start having these ethical conversations and debates.