A 2022 law designed to combat election-related “misinformation” and “disinformation” in Ireland has been put on hold following objections from Brussels and major technology firms. The legislation, which aimed to grant the state new powers to regulate online content, now requires amendments to align with EU rules, leaving its future uncertain ahead of upcoming elections.
The European Commission, along with industry giants like Google, Meta, and TikTok, has pushed back against the Irish law, arguing that it imposes stricter regulations than the EU’s recently enacted pro-censorship law, the Digital Services Act. In response, the Irish government is now revising the law.
Last October, Brussels formally warned then-Foreign Affairs Minister Micheál Martin that failing to address its concerns could trigger legal action against Ireland. The European Commission insisted that aspects of the law conflicted with EU regulations and reserved the right to initiate pre-litigation proceedings if necessary.
If implemented, the Irish law would introduce criminal penalties for the publication or promotion of electoral “disinformation” and undisclosed bot activity. It would also empower the Electoral Commission to monitor and investigate online misinformation related to elections, compelling platforms to remove misleading content when violations occur.
However, opposition from Brussels, which has often supported such censorship laws, together with resistance from tech firms, has stalled the process. Lobby group Technology Ireland submitted a detailed objection to the European Commission, arguing that national laws should not impose additional obligations beyond those set by the Digital Services Act. The group contended that Ireland’s proposal is overly burdensome compared to other EU member states, where only EU-wide rules apply.
One key point of contention is a provision requiring tech companies to notify the Electoral Commission if their platforms are being used to spread “disinformation.” While Irish lawmakers see this as a safeguard against election interference, tech firms argue it exceeds reasonable expectations, suggesting they should only be held accountable if they have “actual knowledge” of manipulative behavior on their services.