› Biases in Algorithms
› Transparency of Algorithms
› Supremacy of Algorithms
› Fake News and Fake Videos
› Self-driving Cars
› Privacy vs Surveillance
› Why Does Ethics in AI Matter?
AI is as prone to bias as humans are, but fortunately, biases in algorithms can also be diagnosed and treated. Algorithms are only as good as the data used to train them. If we want to keep AI bias under control, we need to provide data of good quality. How can we ensure that the data used in AI training do not distort reality? Why do we need openness and transparency in collecting data and creating algorithms?
The fact that companies won’t allow their algorithms to be publicly scrutinized is worrying from an ethical point of view. Even more worrying is that some algorithms are opaque even to their creators. How can we balance the need for more accurate algorithms with the need for transparency toward the people affected by them? If humans are often unaware of their true motives for acting, should we demand that machines be better at this than we actually are?
If we start trusting algorithms to make decisions, who will have the final word on the important ones: humans, or algorithms?
Spreading fake news and fake videos undermines the trust necessary for effective cooperation. We urgently need solutions to distinguish misinformation from real and trustworthy communication. How can we slow the spread of false information? How can AI help us recognize and eliminate fake news?
Google, Uber, Tesla, and many others are joining this rapidly growing field, but many ethical questions remain unanswered. As self-driving cars are deployed more widely, who should be liable when accidents happen? Should it be the company that made the car, the engineer who made a mistake in the code, or the operator who should have been watching? Once self-driving cars are safer than the average human driver, should we make human driving illegal?
The ubiquitous presence of security cameras and facial recognition algorithms will create new ethical issues around surveillance. Should there be regulation of these technologies? Given that social change often begins with challenges to the status quo and civil disobedience, could a panopticon lead to a loss of liberty and stifle social change?
Clearly, we need to find appropriate legislation for AI in these fields. However, we can’t legislate until society forms an opinion, and we can’t form an opinion until we start having these ethical conversations and debates.
Sebastian Szymański, Ph.D., is a philosopher specializing in ethics and practical ethics. His research interests focus on the issue of justice and on ethical problems raised by new technologies, in particular robotics and AI. He works at the Faculty of “Artes Liberales” of the University of Warsaw, where he is a member of the Techno-Humanities Lab. He teaches AI ethics, roboethics, and the ethics of new technologies. Sebastian Szymański is Chairman of the ethics group in the expert team of the Ministry of Digital Affairs for the Polish strategy of AI. He is also a member of the Council for Digital Affairs of the Ministry of Digital Affairs, where he chairs the AI Workgroup. He recently published the book The Justifications of the Theory of Justice: The Legacy of John Rawls (Wydawnictwo Naukowe Scholar, 2018) and is currently working on a book on AI ethics.