The pace of technological development in artificial intelligence (AI) has outpaced the legislative capacity of individual countries. With its proposed AI Act, however, the European Union is attempting to chart a global course for regulating these powerful technologies. The legislation, the first comprehensive framework of its kind, goes beyond governing machines: it establishes a risk classification scheme and imposes strict limitations on uses that threaten human rights or public safety.
Academic analysis of the Act shows that it rests on a "risk-based approach." Under this approach, AI systems are not banned wholesale but categorized: unacceptable uses (such as social scoring systems or mass biometric surveillance in public spaces) are prohibited outright; high-risk uses (such as recruitment systems or the management of critical infrastructure) are subject to rigorous checks and strict transparency requirements; and low-risk uses face fewer restrictions. This pragmatic approach aims to balance fostering innovation with protecting citizens.
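To make the tiered logic concrete, the sketch below models the categories as a simple lookup table. The tier names, use cases, and obligations are illustrative simplifications of my own, not the Act's text; the real boundaries are drawn in legal language, not code.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the Act's categories."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # allowed only under strict obligations
    LOW = "low"                    # minimal restrictions

# Hypothetical mapping of use cases to tiers; real scoping is a legal question.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "spam_filter": RiskTier.LOW,
}

def obligations(use_case: str) -> str:
    """Return a rough, illustrative summary of what each tier entails."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LOW)
    if tier is RiskTier.UNACCEPTABLE:
        return "prohibited: may not be placed on the EU market"
    if tier is RiskTier.HIGH:
        return "permitted: conformity assessment, transparency, human oversight"
    return "permitted: light-touch rules and voluntary codes of conduct"

print(obligations("cv_screening"))  # permitted: conformity assessment, ...
```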
This European model, however, faces significant challenges in practice. First, there is the dilemma of "digital frontiers." Artificial intelligence crosses geographical boundaries at the touch of a button: a California-based tech company can offer a service in Paris without any physical presence there. How, then, does Brussels enforce the law on such entities? The answer lies in financial and market pressure. The EU's single market is among the largest in the world, compelling global companies to comply with European law rather than forgo significant sales. The so-called "Brussels effect" captures the European hope that the Act will become the de facto global standard.
Secondly, there is an ongoing debate about the impact of these rules on the competitiveness of European companies themselves. Critics argue that strict regulation could hamstring European startups and leave them unable to catch up with American and Chinese rivals that face no similar restrictions. Proponents counter that trust is the most valuable currency in the digital economy, and that consumers will not adopt technologies they do not trust; regulation, on this view, is actually an incentive for sustainable and secure innovation.
Furthermore, issues of "algorithmic bias" and "transparency" arise. AI systems are trained on historical data that may encode societal or racial biases. The law calls for a degree of transparency about the data and algorithms used, but large corporations treat these details as trade secrets. The conflict between a citizen's right to know why a decision was made against them (such as a rejected loan application) and a company's right to protect its software will be a major legal battleground in the coming years.
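To see how such bias can surface, consider the minimal audit below: it computes approval rates per group over a handful of hypothetical loan decisions. The data, group labels, and parity check are illustrative assumptions of mine, not a procedure prescribed by the Act.

```python
from collections import defaultdict

# Hypothetical loan decisions: (applicant_group, approved). In a real audit
# these would come from the deployed system's decision logs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Demographic-parity gap: a large gap suggests the model may be reproducing
# historical bias and warrants closer scrutiny.
gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap: {gap:.2f}")  # approval-rate gap: 0.50
```

A gap like this does not prove discrimination on its own, but it is exactly the kind of signal that transparency requirements are meant to surface, and that trade-secret claims can keep hidden.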
From an ethical perspective, the issue is not purely technical but philosophical. Should machines be allowed to make critical decisions? European law tends to answer no, or at least requires a "human in the loop" to review crucial decisions. This position clashes with the trend toward complete automation promoted by some companies seeking quick profits.
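What a "human in the loop" might look like in software is itself left open by the law. The sketch below shows one common pattern: an assumed confidence threshold and a rule that routes adverse or low-confidence decisions to a reviewer, both chosen here purely for illustration, since the Act mandates oversight without prescribing any particular mechanism.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str       # e.g., "approve" or "reject"
    confidence: float  # model's self-reported confidence, 0.0-1.0

REVIEW_THRESHOLD = 0.9  # hypothetical; set by policy, not by the model

def route(decision: Decision) -> str:
    """Send adverse or low-confidence automated decisions to a human."""
    if decision.outcome == "reject" or decision.confidence < REVIEW_THRESHOLD:
        return "queued for human review"
    return f"auto-{decision.outcome}"

print(route(Decision("approve", 0.97)))  # auto-approve
print(route(Decision("reject", 0.99)))   # queued for human review
```

Note the design choice: rejections always go to a human regardless of confidence, reflecting the view that it is the adverse decision, not the uncertain one, that most needs review.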
The journey to regulate artificial intelligence is a marathon, not a sprint. The EU law is a bold and necessary step, but it will not suffice on its own. The world needs genuine international cooperation, perhaps within the framework of the United Nations, to establish common frameworks that avoid a fragmentation of laws and ensure that future technologies serve all of humanity rather than becoming tools of control and domination in the hands of a few.
