Description
This book proposes three liability regimes to combat the wide responsibility gaps caused by AI systems: vicarious liability for autonomous software agents (actants); enterprise liability for inseparable human-AI interactions (hybrids); and collective fund liability for interconnected AI systems (crowds).

Based on information technology studies, the book first develops a threefold typology that distinguishes individual, hybrid and collective machine behaviour. A subsequent social science analysis specifies the socio-digital institutions related to this threefold typology and then determines the social risks that emerge when algorithms operate within these institutions. Actants raise the risk of digital autonomy, hybrids the risk of double contingency in human-algorithm encounters, and crowds the risk of opaque interconnections. The book demonstrates that the law needs to respond to these specific risks by recognising personified algorithms as vicarious agents, human-machine associations as collective enterprises, and interconnected systems as risk pools, and by developing corresponding liability rules.

The book relies on a unique combination of information technology studies, sociological institution and risk analysis, and comparative law. This approach uncovers recursive relations between types of machine behaviour, emergent socio-digital institutions, their concomitant risks, the legal conditions of liability rules, and the ascription of legal status to the algorithms involved.