AI-Driven Autonomous Vehicles and Legal Liability: Redefining Accountability in Human-AI Collaborative Systems
DOI: https://doi.org/10.64252/3ffewb32
Abstract
AI-powered autonomous vehicles (AVs) have raised pressing questions about who is responsible and answerable when things go wrong in human-AI collaboration. This research examines how the decisions of machine learning algorithms in AVs are interpreted under existing law. Using Decision Trees, Random Forests, Support Vector Machines, and Deep Neural Networks, the study evaluated driver-behavior prediction in AVs across a controlled set of scenarios. The Deep Neural Network achieved the highest accuracy at 94.5%, followed by the Random Forest at 91.2%, while the Support Vector Machine and Decision Tree fell below at 87.6% and 83.4%. Accuracy, however, traded off against interpretability: the Decision Tree offered the most easily traceable decision logic. Drawing on both the comparative experiments and recent legal and ethical scholarship, the study argues that current law cannot adequately handle situations in which a decision is made autonomously by a non-human agent. The results support a shared-accountability model in which developers, manufacturers, and the AI system itself all bear responsibility. The research introduces a framework that can guide policymakers in drafting future regulations for AI technologies in mobility.
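The following is a minimal sketch of the kind of four-model comparison the abstract describes, not the paper's actual pipeline: the driver-behavior dataset is not published, so a synthetic stand-in (make_classification) substitutes for the controlled driving scenarios, and all feature counts, labels, and hyperparameters here are illustrative assumptions.

    # Illustrative sketch only: the real study's data and settings are not
    # available, so a synthetic dataset stands in for the driving scenarios.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Hypothetical stand-in: 20 sensor-style features, binary driver-behavior
    # label (e.g., brake vs. no-brake) -- assumptions, not the paper's data.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # The four classifier families named in the abstract.
    models = {
        "Decision Tree": DecisionTreeClassifier(random_state=42),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
        "SVM": SVC(kernel="rbf", random_state=42),
        "Deep Neural Network": MLPClassifier(
            hidden_layer_sizes=(64, 32), max_iter=500, random_state=42
        ),
    }

    # Fit each model and report held-out accuracy side by side.
    for name, model in models.items():
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: {acc:.1%}")

On synthetic data the ranking will not reproduce the paper's figures; the sketch only shows the comparison setup. The interpretability contrast the abstract notes can be inspected the same way, for example by printing the fitted tree with sklearn.tree.export_text, which yields the kind of traceable decision path unavailable from the neural network.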