The increasing impact of artificial intelligence (AI) poses complex challenges to the protection of personal data and the defence of civil rights. The growth of the digital economy, based on the continuous recording and analysis of human behaviour, has heightened concerns about privacy and individual autonomy. This study analyses how the main legal frameworks of the European Union (EU), specifically the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act), enhance data protection for individuals in the EU by imposing restrictions and reinforcing ethical standards. Using a qualitative approach, the research combines exploratory analysis of European legislation with comparative analysis of two emblematic cases in different contexts, each illustrating the restriction of civil liberties for the same structural reason, the control of information and collective behaviour: the Cambridge Analytica scandal and the rise of algorithmic censorship mechanisms in India under the pretext of countering foreign interference and disinformation. The aim is to understand how different political contexts and legal architectures respond to the growing tensions between technological innovation and the preservation of fundamental rights. The findings indicate that the Cambridge Analytica case revealed structural limitations in privacy protection systems, reinforcing the need for stronger accountability mechanisms. In contrast, the Indian case illustrates how the deployment of AI can facilitate state surveillance and information control, justified by constitutional provisions, undermining freedom of expression and other civil liberties. The choice of these cases is deliberate, as it reveals the differences in political and institutional approaches to AI governance and in the mechanisms developed to defend or violate fundamental freedoms.
The European Union, by conceiving the right to privacy as a fundamental principle, has created a pioneering global regulatory benchmark, reinforced by the adaptive work of the European Data Protection Board (EDPB). Ultimately, the study argues that only egalitarian, comprehensive, and adaptable legal systems can ensure that the development of AI is aligned with democratic principles and respects individual freedoms in contexts of increasing automation.