Artificial intelligence (AI) technologies are the driving force behind digitization. Due to their enormous social relevance, the responsible use of AI is of particular importance. Research on and application of responsible AI is a very young discipline that requires combining research activities from different fields in order to design and deploy AI systems in a reliable, transparent, secure, and legally compliant way.
The aim of this PhD project is to investigate bias in social networks such as Twitter and its effect on opinion formation and opinion change. This question is a central challenge for many applications, from online surveys to the placement of advertising. User-generated content (UGC) is highly subjective and often reflects a wide variety of biases and prejudices.

Furthermore, in social networks, content is shown to or withheld from users by AI algorithms based on user information such as their location, click behavior, and search history. The result is isolation in cultural or ideological bubbles (filter bubbles). In principle, this gives service providers the means to promote or suppress certain opinions, e.g. in politics, economics, or on migration issues.

Research into UGC is itself affected by bias: only about 1% of all tweets on Twitter are available for UGC research, and studies show that this sample is biased as well. Identifying bias in opinion formation and opinion change is highly complex. This PhD project therefore investigates the effects of bias on opinion formation and opinion change. The main focus will be on how opinions are formed and how users' opinions change over time.
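To make the filter-bubble mechanism concrete, the following is a minimal, hypothetical sketch of personalization-driven filtering: users and posts are represented as topic vectors, and the feed ranks posts by their similarity to a profile inferred from the user's click history. All names and data here are illustrative assumptions, not the recommendation logic of any real platform.

```python
import numpy as np

rng = np.random.default_rng(42)

n_topics = 5
items = rng.random((100, n_topics))          # 100 candidate posts, each a topic mix
click_history = rng.random((20, n_topics))   # the user's 20 previously clicked posts

# Infer a user profile purely from past behavior (the "click behavior" signal).
user_profile = click_history.mean(axis=0)

# Cosine similarity between each candidate post and the user profile.
scores = items @ user_profile / (
    np.linalg.norm(items, axis=1) * np.linalg.norm(user_profile)
)

# Top-10 feed: only the posts most similar to what the user already clicks on,
# which is the narrowing effect behind filter bubbles.
feed = np.argsort(scores)[::-1][:10]
print("shown:", feed)
```

Even this simplified ranker never surfaces posts dissimilar to the inferred profile, which illustrates how purely engagement-based personalization can isolate users from opposing content.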
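How opinions change over time can likewise be illustrated with a standard model from the opinion dynamics literature; the project abstract does not name a specific method, so the following Deffuant-style bounded-confidence simulation is only an assumed example. The confidence bound `eps` (an assumed parameter) mimics bubble-like isolation: users influence each other only when their opinions are already close, and small values of `eps` produce fragmented opinion clusters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, steps = 200, 20000
eps, mu = 0.15, 0.5                 # confidence bound and convergence rate (assumed)
opinions = rng.random(n_users)      # initial opinions on a [0, 1] scale

for _ in range(steps):
    # Pick a random pair of users for a potential interaction.
    i, j = rng.integers(n_users, size=2)
    if i != j and abs(opinions[i] - opinions[j]) < eps:
        # Users only influence each other if their opinions are close enough,
        # and then move toward each other by a fraction mu of their gap.
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

# Sorted opinions reveal the emerging clusters (opinion fragmentation).
print(np.round(np.sort(opinions), 2))
```

In such models, narrowing the interaction radius plays a role analogous to algorithmic filtering, which is one way the effect of bias on opinion formation and opinion change can be studied quantitatively.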