Misinformation, Disinformation, Hate Speech and Publication of Other Information (MDHI) Bill, 2025
Executive Summary
The Misinformation, Disinformation, Hate Speech and Publication of Other Information (MDHI) Bill, 2025 establishes a comprehensive regulatory framework to combat false information and harmful speech in Ghana. The bill creates a new Division under the National Communications Authority with broad investigative and adjudicatory powers (9, 11); prohibits the publication of misinformation (false information regardless of intent), disinformation (intentionally misleading information), and hate speech (22, 36); and imposes mandatory verification requirements on publishers, with heightened standards for media outlets and politicians (23). The legislation applies to government officials, public institutions, private individuals, and entities, with enforcement mechanisms ranging from correction orders to criminal penalties of up to 500 penalty units and one month imprisonment for malicious misinformation causing public harm (75).
Digital Innovation and Business Environment: The bill imposes significant compliance obligations on media outlets, online platforms, and content creators. All covered entities must conduct annual human rights audits of their algorithms and content moderation practices (80), perform yearly misinformation risk assessments (81), establish fact-checking departments (82), and provide bi-annual training on false information (83). Licensed entities must obtain fact-checking certification and complete two years of training for license renewal (82, 83). Non-compliance triggers warnings followed by financial penalties, with the Division empowered to recommend license suspension or revocation after three warnings (71). These requirements create substantial operational costs and administrative burdens, particularly for smaller media organizations and startups. However, the bill provides safe harbor protections for internet intermediaries, shielding them from liability for user-generated content they did not create or modify, and explicitly stating they have no general obligation to monitor content (77). Content restriction orders can only be issued against intermediaries in limited circumstances involving diplomatic harm and after the original publisher has failed to comply (79).
Freedom of Speech and Expression: The bill includes constitutional safeguards requiring interpretation that favors freedom of speech, expression, and privacy (4, 6). Opinions, commentary, and good-faith interpretations are explicitly excluded from the definition of false information (17), and public criticism of government officials and dissatisfaction with public services are protected (17). The bill recognizes a public interest defense, protecting disclosure of information revealing criminal activity, government misconduct, civil wrongdoing, or controversial public health opinions (6). Individuals who quickly correct false statements, retract them, and apologize receive legal protection (35). However, the framework grants the government broad authority to pursue misinformation claims against critics, though it cannot act solely on insults to officials or regarding ruling party matters (26). The Division—whose Director is appointed by the President (14)—has initial adjudicatory power over most complaints, with judicial review available but only after administrative proceedings (60). The broad definitions of key terms like "public interest" (25), "hate speech" (37), and "false information" (19) create uncertainty about what speech is permissible, potentially encouraging self-censorship.
Privacy and Data Rights: The bill prohibits disclosure of private facts—intimate details about personal life, family, health, finances, and relationships not widely known (45, 46)—and specifically bars mass media from using private information for entertainment purposes, including in parodies and satires (48). Private individuals, government officials, politicians, and celebrities can all pursue claims for unauthorized disclosure of private facts (49). The bill also criminalizes publication of confidential government information affecting public security, welfare, or diplomacy, including Cabinet communications, closed-door meeting details, and sensitive economic data (52, 53). While the legislation preserves existing remedies under the Data Protection Act (51), the expansive definition of private facts and confidential information raises concerns about investigative journalism and whistleblower protections. The public interest defense (6) may protect some disclosures, but the burden of proof and the Division's discretion in applying this defense create uncertainty for journalists and civil society organizations seeking to expose wrongdoing.
Impact Analysis
Digital Innovation
The MDHI Bill creates a challenging regulatory environment for digital innovation in Ghana through extensive compliance requirements and enforcement mechanisms that will significantly increase operational costs and business risk. The legislation imposes mandatory annual human rights audits of algorithms and content moderation practices (80), yearly...
Freedom of Speech
The Misinformation, Disinformation, Hate Speech and Publication of Other Information (MDHI) Bill, 2025 poses severe threats to freedom of speech and expression in Ghana despite including constitutional safeguards. The bill creates a powerful regulatory apparatus under the National Communications Authority, headed by a Presidential appointee (14), with...
Privacy & Data Rights
The Misinformation, Disinformation, Hate Speech and Publication of Other Information (MDHI) Bill, 2025 creates a restrictive privacy framework that significantly constrains investigative journalism and public accountability mechanisms while expanding government control over information disclosure. The bill prohibits disclosure of "private facts"—broadly defined to include intimate...
Business Environment
The Misinformation, Disinformation, Hate Speech and Publication of Other Information (MDHI) Bill, 2025 creates a highly restrictive regulatory environment for businesses operating in Ghana's media, digital, and communications sectors. The legislation imposes extensive compliance obligations that will significantly increase operational costs and administrative burdens, particularly for...
Critical Issues with This Bill
These concerns pose significant risks to free expression, privacy, and Ghana's digital innovation and business environment
Reversed Burden Silences Health Critics
Section 30(6) reverses the burden of proof, requiring accused persons to prove their health statements are "true or accurate" rather than requiring the government to prove falsity. This violates fundamental due process principles and creates a chilling effect on legitimate public health discourse—medical professionals discussing emerging treatments, journalists reporting on health policy controversies, or citizens criticizing government pandemic responses must now prove their statements meet undefined "accuracy" standards or face criminal penalties. The provision particularly threatens political speech about public health policy (vaccination mandates, lockdowns, treatment protocols), allowing government to criminalize policy criticism by forcing critics to prove their alternative views are "accurate"—an impossible standard for contested scientific and policy debates.
Reversed Burden Chills Election Speech
This provision reverses the burden of proof for election-related speech, requiring accused speakers to prove their statements true rather than requiring the government to prove falsity (subsection 8). Combined with the vague "likely to influence" standard and broad scope covering Electoral Commission information, voting processes, results, and candidate scandals, this creates severe chilling effects on political discourse. Speakers face criminal penalties unless they can document every election-related statement, discouraging investigative journalism about candidates, commentary on electoral irregularities, and democratic participation—contrary to international human rights standards protecting freedom of expression.
Vague Hate Speech Ban
This provision prohibits "hate speech" defined so broadly (37) that it captures factual statements, satire, and speech that merely "promotes negative feelings" toward protected groups. The definition eliminates intent requirements and includes entertainment content, going far beyond international standards requiring incitement to imminent violence. Speakers cannot reliably determine what is prohibited, creating a severe chilling effect on legitimate political debate, religious discourse, social commentary, and artistic expression.
Strict Liability for Truthful Speech
This provision eliminates intent requirements for hate speech liability, meaning speakers can be punished even when they had no intention to cause harm. It explicitly criminalizes factual statements that "incite hatred" toward groups, departing from democratic norms that protect truthful speech. The definition captures satire, parody, and artistic expression as hate speech if they affect a group's "dignity"—a highly subjective standard that creates legal uncertainty about what speech is permissible and encourages self-censorship on controversial topics.
Criminalizes Speech Without Incitement Requirement
Section 42 prohibits "indecent expressions" that explicitly do NOT incite hatred or violence, yet still trigger criminal penalties. The provision bans speech that "may reasonably provoke violence"—a vague, subjective standard based on hypothetical audience reactions rather than actual incitement. This falls well below international free speech norms (ECHR Article 10, ICCPR Article 19), which require narrow definitions, direct causation, and imminent harm. Undefined terms like "ethnic slurs and derogatory commentary" create uncertainty about what speech is permissible, potentially capturing legitimate but sharply worded commentary about ethnic groups. The Division—headed by a Presidential appointee—has discretionary power to determine what "may reasonably provoke violence," creating severe chilling effects on speech about ethnicity and group-related issues.
Compelled Speech Without Knowledge
This provision requires a person to publish government-mandated corrections even when that person "does not know or has no reason to believe that the information is false" (Section 63(4)). This creates strict liability for speech—the Division can compel a speaker to publicly endorse its version of "truth" regardless of the speaker's good faith, intent, or reasonable belief. Corrections must be published "in the specified form and manner" at the speaker's own cost, potentially including newspaper publication when "consequences are extreme" (an undefined standard). This fundamentally violates freedom of expression by forcing affirmative speech without requiring knowledge or intent, determined by a presidentially-appointed administrative body rather than courts, and applying to the bill's vague definitions of misinformation and hate speech.
Speech Suppression Without Fault
Section 64(6) permits Stop Communication Directions to be issued "even if the person does not know or has no reason to believe that the information is false"—establishing strict liability for speech restrictions. This eliminates the fault requirement fundamental to free speech protections in democratic societies, where restrictions typically require at minimum negligence or recklessness. Combined with the undefined "substantially similar" standard in s.64(3), speakers face orders to cease not only specific statements but also any related content the Division or Court deems similar, creating a chilling effect that encourages self-censorship. The provision functions as prior restraint, preventing future speech rather than addressing past harm, with initial adjudication by the Division (an executive body) rather than courts.
Strict Liability Removal Orders Suppress Speech
This provision authorizes removal orders that force speakers to delete online content without requiring proof they knew the information was false or constituted hate speech—creating strict liability for speech suppression. The order extends beyond the specific content to "any statement or material that is substantially similar," a vague standard that chills protected expression by leaving speakers uncertain what speech is prohibited. Combined with correction and stop communication orders, speakers face cumulative restrictions all based on strict liability, violating the principle that content-based speech restrictions require intent or negligence. The provision lacks explicit pre-enforcement hearing requirements, enabling government-ordered de-platforming before speakers can defend their expression in court.
Executive Prior Restraint on Speech
This provision allows the Division—an administrative body headed by a presidential appointee (14)—to issue orders immediately halting publication activities without prior judicial review, notice, or opportunity to be heard. Violating these orders triggers automatic penalties without warning (bypassing the three-warning system in 71), and orders can be issued against anyone "deemed to be engaged" in publishing "false or other information"—an extraordinarily vague standard. This creates a mechanism for executive prior restraint on speech, where government-appointed officials can silence speakers before constitutional defenses can be raised, fundamentally undermining freedom of expression and democratic discourse.
Vague "Public Harm" Definitions Criminalize Protest
The provision criminalizes misinformation causing "public harm, violence, fear, unrest or public disturbance" with penalties up to 500 penalty units and 1 month imprisonment. However, the definitions are dangerously vague: "fear" includes "anxiety about the administration...of a public institution," "unrest" includes "widescale protests" and "agitation," and "public disturbance" includes "widespread anxiety about change in public policy." These definitions fail the rule of law requirement of legal certainty—speakers cannot reasonably predict what speech is criminal. More fundamentally, they criminalize core political speech: criticism that sparks lawful protests, creates concern about government management, or generates anxiety about policy changes. This violates international standards (ICCPR, ECHR) requiring criminal speech restrictions be narrowly defined and proportionate, creating severe chilling effects on democratic discourse and government accountability.
Pre-Publication Verification as Prior Restraint
This provision requires all speakers—media houses, journalists, content creators, and influencers—to fact-check "before publishing information," creating a prior restraint mechanism fundamentally at odds with freedom of speech principles. The provision provides no standards for what constitutes adequate fact-checking, who determines sufficiency, or how the Division applies certification requirements, creating legal uncertainty that encourages self-censorship. For licensed entities, fact-checking certification becomes a licensing prerequisite (83 compounds this by requiring two years of training), giving the Division (headed by a President-appointed Director) discretionary gatekeeping power over who can speak. This mandatory pre-publication verification is practically impossible for real-time commentary, breaking news, and social media, effectively prohibiting certain forms of protected speech.
Administrative Speech Adjudication Without Due Process
This provision grants the Division binding authority to determine speech legality and impose sanctions (10.2) without establishing essential procedural safeguards. The Division—whose Director is presidentially appointed—can render final determinations on whether speech constitutes misinformation, disinformation, or hate speech before any judicial review occurs. The provision fails to specify notice and hearing requirements, evidentiary standards, or transparency obligations for the Division's internal rules (10.1). This creates a system where speech restrictions are initially imposed by an administrative body rather than an independent court, with judicial review available only after binding decisions are made (60). The lack of procedural protections, combined with the bill's broad definitions, creates significant risk of arbitrary enforcement and chilling effects on constitutionally protected expression.
Opaque Regulatory Decision-Making Threatens Business Planning
The Division can publish its own internal rules and render binding decisions on sanctions and remedies without establishing transparency requirements or clear procedural standards. This creates regulatory unpredictability for businesses subject to the bill's extensive compliance obligations—businesses cannot effectively assess compliance costs or plan operations when enforcement standards can change through unpublished internal rules and when binding decisions lack procedural constraints. The combination of self-directed rule-making authority with binding adjudicatory power over business operations undermines the regulatory certainty necessary for commercial planning and investment decisions.
Broad Enforcement Discretion Creates Regulatory Uncertainty
The Division receives sweeping compliance monitoring and enforcement powers (11(1)(a), (f)) to implement the bill's extensive obligations—annual audits, risk assessments, fact-checking departments, and training—but the provision provides no clear enforcement standards, timelines, or procedural safeguards. This creates significant operational uncertainty for platforms, media outlets, and content creators who must invest in compliance without knowing how the Division will interpret requirements or exercise its binding adjudicatory authority. The Division's precedence over other regulatory bodies (per 12) further concentrates regulatory power without corresponding clarity, creating an unpredictable environment that discourages digital innovation and disproportionately burdens startups and smaller platforms.
Executive-Controlled Speech Adjudication Power
The Division—headed by a presidential appointee (14)—combines investigative, prosecutorial, and adjudicatory functions over speech-related complaints, making binding decisions on liability and sanctions without prior judicial oversight. This concentration of power within an executive-controlled entity deviates from rule of law principles requiring independent adjudication of speech restrictions. While the provision mandates promotion of "freedom of speech and expression" and parliamentary reporting, the structural design creates conditions for potential abuse, particularly when enforcing the bill's broad definitions of prohibited speech.
Privacy Investigations Lack Due Process Protections
The Division receives broad investigative and adjudicatory powers over privacy complaints (11(1)(d)-(e)) without explicit procedural safeguards for investigation subjects. The provision does not specify whether journalists, whistleblowers, or others facing privacy investigations receive notice, opportunity to respond, or clear evidentiary standards before the Division makes binding liability determinations. Given the bill's expansive definitions of private facts (45) and confidential information (52), this procedural gap could enable investigations that chill legitimate reporting on government misconduct, with judicial review available only after administrative proceedings conclude (60).
Enforcement Discretion Without Procedural Standards
The Division receives broad investigative and adjudicatory powers to "ensure and monitor compliance," "investigate Complaints or Reports," and "establish liability and impose sanctions," all as binding decisions. However, the provision provides no procedural standards for investigations, no timelines for decisions, no evidentiary requirements, and no clarity on what constitutes adequate compliance. This creates significant regulatory uncertainty for businesses subject to the bill's compliance obligations, making it impossible to predict enforcement actions or plan compliance strategies. The lack of procedural constraints on the Division's enforcement discretion increases operational risk and compliance costs across all business types.
Division Overrides Independent Media Watchdog
This provision grants the Division supremacy over the National Media Commission, an independent constitutional body established to protect press freedom. When their roles overlap, "the functions of the Division shall prevail"—subordinating an independent constitutional institution to an executive agency whose Director is appointed by the President (14). This eliminates the institutional check that the National Media Commission provides against executive overreach in regulating speech and media, concentrating all authority over misinformation, disinformation, and hate speech in a single executive-controlled entity with no independent counterweight.
Investigator-Judge Fusion Threatens Fair Speech Adjudication
The Complaints and Investigation Subdivision combines investigative and adjudicatory powers in a single body that can initiate investigations "on its own accord" and make "binding decisions" about speech violations. This violates the fundamental principle that investigators should not judge their own findings. Without separation between those who investigate alleged misinformation and those who decide guilt, speakers face a body that acts as prosecutor, investigator, and judge simultaneously—creating inherent bias and denying fair adjudication of speech restrictions.
Presidential Appointee Judges Speech
The Division Director—who makes binding decisions on misinformation, disinformation, and hate speech complaints—is appointed solely by the President without parliamentary confirmation, independent commission involvement, or removal protections. This creates a structural conflict where an executive appointee adjudicates speech about the government, including complaints by government against critics. Unlike independent regulatory bodies in democratic systems (which use multi-member boards, fixed terms, and legislative oversight), this concentrates adjudicatory power over speech in a single presidential appointee, creating risk of politically-motivated enforcement and undermining the neutrality essential for speech regulation.
Algorithmic Content Excluded from Safe Harbor
Section 18(4) excludes "algorithmically generated information" from intermediary safe harbor protections, creating liability exposure for platforms' recommendation algorithms, content ranking, search results, and AI-generated content. This carve-out goes beyond international norms (EU DSA, US Section 230) and imposes impossible compliance burdens—platforms cannot realistically verify the truth/falsity of every algorithmically ranked or recommended piece of content under 19's broad definition. The undefined term "algorithmically generated information" creates uncertainty about what features trigger liability, chilling AI innovation and disproportionately burdening startups with limited compliance resources.
Undefined Algorithmic Liability Barrier
Section 18(4) excludes "algorithmically generated information" from safe harbor protections for internet intermediaries, creating liability exposure for platforms using AI or algorithmic systems. The term is undefined—leaving platforms uncertain whether it covers AI-generated content only, or also algorithmic curation, search rankings, and recommendation feeds. Combined with 19's broad definition of false information, this creates impossible compliance burdens: platforms cannot realistically verify every algorithmically processed piece of content against truth standards. This diverges from international norms (EU DSA, US Section 230) and creates disproportionate barriers for startups and smaller platforms lacking extensive compliance resources, potentially deterring digital business investment in Ghana.
Vague "Misleading" Standard Chills Speech
This provision defines false information using subjective terms like "misleading" and "deceptive" alongside objective falsity, creating legal uncertainty about what speech is prohibited. While the burden of proof lies with the accuser (a protective safeguard), the inclusion of "misleading" as a standalone category deviates from international best practice—democracies typically focus false information laws on objectively verifiable falsehoods, not subjective judgments about presentation. The "partial truth doctrine" (19.3) is particularly problematic: determining whether an omission makes something "more misleading than true" requires highly subjective judgment and could chill legitimate selective reporting and investigative journalism. Speakers cannot reliably predict whether factually accurate statements will be deemed "misleading," undermining legal certainty and enabling potentially arbitrary enforcement.
Strict Liability for Honest Mistakes
This provision prohibits misinformation (false information "regardless of the intention to mislead") and disinformation (false information intended to mislead). The critical problem is the strict liability standard for misinformation: people can be held liable for unintentional falsehoods even when they genuinely believed the information was true. This deviates from international democratic norms, where liability for false speech typically requires at least negligence or recklessness. Combined with broad definitions of "false or inaccurate information" and "prejudicial to public interest," this creates a chilling effect on legitimate speech: journalists reporting on developing stories, academics presenting preliminary findings, citizens sharing information they reasonably believed accurate, and whistleblowers disclosing information they believe reveals wrongdoing all face potential liability for honest mistakes. While 23 provides a defense if due diligence "could not have revealed" the falsehood, this creates a practical burden to conduct verification or face liability.
Undefined Verification Standards Burden Innovation
This provision requires all publishers to conduct "necessary due diligence" to verify information accuracy, with "higher standards" for media houses, journalists, influencers, commentators, celebrities, brands, and multinational companies—but neither term is defined. Digital platforms, media startups, and content creators cannot determine what conduct satisfies the law, creating impossible compliance challenges for businesses operating at scale. The tiered system penalizes growth: successful digital creators face heightened obligations precisely when they achieve market success. Combined with the annual audits, risk assessments, and fact-checking requirements elsewhere in the bill (80, 81, 82), this creates substantial compliance costs and legal uncertainty that will drive over-compliance, stifle innovation in digital media, and create significant barriers to market entry for startups.
Two-Tier Speech System Burdens Public Discourse
This provision creates undefined "higher standards" for journalists, academics, politicians, and public commentators—precisely those engaged in democratic debate—while ordinary citizens face lower verification burdens for identical statements. The terms "necessary due diligence" and "higher standard" are undefined, making it impossible for speakers to know what conduct satisfies the requirement. Combined with criminal penalties under 75, this creates a chilling effect on investigative journalism and scholarly commentary, as public discourse participants face greater legal jeopardy than private citizens for the same speech. While defenses exist, speakers must risk liability first and defend later, fundamentally disadvantaging those most important to democratic discourse.
Undefined "Higher Standard" Burdens Business
This provision subjects "popular product brands and multinational companies" to an undefined "higher standard of due diligence" when publishing factual information, without specifying what this standard requires. Businesses cannot determine what verification processes satisfy compliance, creating legal uncertainty for corporate communications, marketing, and product information. Combined with 24's prohibition on publishing false information "for financial reward," this creates overlapping requirements that burden commercial speech with undefined compliance obligations and potential liability under criminal penalties.
Presumption of Guilt for Reputation
Section 24(3) creates a rebuttable presumption that anyone who "earns a reputation publicly for constantly and incessantly publishing false information" is engaged in business misinformation. This reverses the burden of proof, requiring the accused to prove innocence rather than the state proving guilt—violating the presumption of innocence principle. The triggering criteria are dangerously vague: "earn a reputation," "constantly and incessantly," and "affects public interest" lack clear thresholds, creating legal uncertainty. This could capture legitimate journalists whose investigative reporting is disputed, satirists whose work is mischaracterized, or political commentators critical of government. Combined with 25's broad "public interest" definition and potential license revocation sanctions, this provision creates a severe chilling effect on investigative journalism and political speech.
Vague Commercial Liability Creates Market Uncertainty
Section 24(3) creates a rebuttable presumption that persons who "earn a reputation publicly for constantly and incessantly publishing false information" are engaged in the business of misinformation. This provision imposes legal uncertainty on media companies, content platforms, and verification services because "constantly," "incessantly," and "earn a reputation" lack objective thresholds. Businesses cannot assess compliance risk or structure operations to avoid liability. The reversed burden of proof requires companies to prove they are NOT engaged in commercial disinformation—an expensive defense that deters market entry and investment, particularly for startups in fact-checking and content moderation. This approach deviates from international standards (EU Code of Practice, OECD guidelines) which address commercial disinformation through transparency and disclosure rather than subjective presumptions.
Vague "Public Interest" Enables Speech Suppression
This provision defines when misinformation liability applies by requiring enforcement to be "in the public interest"—but the definition is dangerously vague. Most problematic is 25(2)(f), which allows enforcement to prevent "diminution of public confidence" in government institutions. This standard is so broad it could encompass virtually any criticism of government performance, policy failures, or institutional conduct. Combined with undefined terms like "public trust" and "friendly relations with other countries," the provision gives the Division (whose Director is appointed by the President) sweeping discretion to determine what speech serves the "public interest." This creates severe chilling effects on investigative journalism, opposition criticism, and civil society advocacy—speakers cannot clearly determine what is permissible when any government criticism could be framed as affecting "public confidence."
Government as Judge and Prosecutor
This provision allows the government to pursue misinformation claims against critics through a Division whose Director is appointed by the President (14), creating a structural conflict where government-aligned officials make initial determinations about whether criticism of government is false. While safeguards exist—government cannot enforce claims about the ruling party or based solely on insults to officials, and bears the burden of proof—these do not address the institutional design flaw. Combined with the broad "public interest" categories in 25 (protecting "public trust," "public welfare," preventing "diminution of public confidence"), this creates uncertainty about what government-critical speech is permissible, encouraging self-censorship. Judicial review is available only after administrative proceedings (60), meaning the government-controlled body decides first.
Unconstrained Institutional Censorship Power
This provision grants public institutions enforceable rights to pursue misinformation claims without the safeguards imposed on the Government in 26. While the Government cannot pursue claims about ruling party matters or based solely on insults, no such limitations apply to public institutions—an undefined category that could include regulatory bodies, state enterprises, security agencies, and other executive entities. This creates a mechanism for unelected state actors to suppress criticism without democratic accountability, particularly threatening investigative journalism exposing institutional corruption and civil society monitoring of public services.
Officials Weaponize Act Against Critics
This provision grants government officials, public officers, judiciary members, and election candidates enforceable rights to pursue misinformation claims against critics, creating severe power asymmetry. Officials can leverage institutional resources and political backing to pursue claims while critics face resource constraints defending themselves. The vague definition of "candidate" (including anyone "publicly known to contest") extends protection to potential candidates before formal declaration, shrinking space for pre-election scrutiny. By allowing officials to pursue claims in both official and personal capacity, the provision enables reframing legitimate policy criticism as personal misinformation, circumventing limits on government claims about "insults."
Unlimited Private Litigation Risk
This provision grants any private individual or entity standing to initiate misinformation proceedings against digital platforms, content creators, and online publishers. Combined with the bill's broad definition of misinformation (false information regardless of intent) and administrative adjudication by a Division headed by a presidential appointee, this creates significant litigation exposure for digital businesses. Startups, content platforms, and tech entrepreneurs face unpredictable legal costs and operational uncertainty, as there are no anti-SLAPP protections or mechanisms to quickly dismiss frivolous claims—chilling innovation in Ghana's digital economy.
Private Censorship Through Litigation
This provision enables any private individual or entity to initiate misinformation proceedings against any speaker, creating a privatized censorship mechanism with significant chilling effects. Unlike defamation law with its developed safeguards, this operates within a framework where "misinformation" includes unintentional inaccuracies and where a Division headed by a presidential appointee has initial adjudicatory power (60). Well-resourced entities can weaponize this provision to silence critics, competitors, or journalists through strategic litigation—even unsuccessful claims impose costs and reputational harm, discouraging legitimate speech about matters of public concern.
Commercial Weaponization of Misinformation Claims
This provision grants unlimited standing to private individuals and entities to pursue misinformation claims against "any person," creating a powerful tool for competitive litigation warfare. Businesses face claims from competitors over comparative advertising, market analysis, product reviews, or competitive intelligence—with no apparent anti-SLAPP protections or clear thresholds for frivolous claims. The Division's initial adjudicatory role means businesses must navigate costly administrative proceedings before reaching judicial review, creating substantial defensive litigation costs that disproportionately burden smaller enterprises and startups unable to maintain legal defense budgets.
Reversed Burden Kills Health Innovation
This provision reverses the burden of proof (30(6)), requiring anyone accused of publishing "false or inaccurate" health information to prove their statements are true—rather than requiring accusers to prove falsity. For digital health startups, telemedicine platforms, wellness apps, and health content creators, this creates impossible compliance barriers. A mental health app discussing coping strategies, a nutrition influencer sharing dietary advice, or a fitness platform recommending supplements could face criminal liability unless they can affirmatively prove accuracy—a standard that's particularly devastating for evolving medical science where consensus changes. Combined with vague standards like "unverified" and "unsubstantiated" (30(2)) and undefined fact-checking requirements (30(5)), this provision forces digital health businesses to either avoid health claims entirely or maintain expensive legal/medical teams to defend every statement, effectively killing innovation in Ghana's digital health sector.
Health Sector Faces Unprovable Accuracy Standard
This provision requires businesses in health sectors (pharmaceuticals, medical devices, wellness, telemedicine) to prove the accuracy of any public health statement if challenged, reversing the normal burden of proof. Combined with vague standards like "unverified" and "unsubstantiated," this creates unpredictable legal liability that cannot be adequately managed through compliance programs. A competitor or government official can trigger proceedings forcing a business to prove accuracy of marketing claims, product descriptions, or public statements—an impossible standard for emerging therapies, evolving medical consensus, or forward-looking statements. This deters investment in Ghana's health sector and creates competitive disadvantage versus businesses in neighboring countries operating under normal legal standards.
Election Speech Compliance Burden
This provision requires all content creators, influencers, and media outlets to conduct mandatory fact-checking of election information according to Division-prescribed guidelines, while placing the burden of proof on publishers to prove accuracy of any challenged election content. The undefined standard of information "likely to influence" election outcomes creates operational uncertainty, making it impossible for digital platforms to develop clear moderation policies. Combined with potential liability for candidate information and foreign coordination restrictions, this creates prohibitive compliance costs that favor established media organizations over digital startups, independent journalists, and innovative platforms—particularly during election periods when political engagement is highest.
Proof Burden Bars Small Media
Subsection (8) reverses the burden of proof, requiring publishers to prove the truth of election information rather than accusers proving falsity. Combined with the vague "likely to influence" standard in subsection (1), this creates asymmetric litigation risk that smaller media outlets, independent journalists, and content creators cannot afford to manage. The mandatory fact-checking requirements in subsection (7) apply even to individual "influencers and content creators," imposing institutional compliance costs on non-institutional actors. This consolidates election coverage in the hands of well-resourced legacy media, creating barriers to market entry and reducing competition in political journalism.
Accurate Facts Prohibited by Tone
This provision prohibits media houses from publishing "otherwise accurate information" when "substantial embellishments" cause it to "become inaccurate" through emotional impact alone—even though the underlying facts remain true. The test relies on subjective judgments about whether information is "overly exaggerated" and evokes emotions that the facts do not "reasonably evoke." This creates a fundamental conceptual problem: factually accurate information cannot become factually inaccurate through presentation style. The vague standards—combined with potential license suspension after warnings (71)—threaten core journalistic practices of using emphasis, dramatic headlines, and narrative framing to highlight newsworthy aspects of factual stories, creating a chilling effect on legitimate journalism based on subjective government judgments about appropriate emotional tone.
Global Speech Regulation Burden
This provision allows Ghana to regulate speech made anywhere in the world by anyone, so long as it targets a Ghanaian citizen—creating an unprecedented compliance burden for digital platforms and content creators globally. While non-hate speech violations require a 2-year residency connection (subsection 3), hate speech can be prosecuted extraterritorially regardless of the speaker's location or connection to Ghana (subsection 6). This means a startup in Silicon Valley, a journalist in London, or a blogger in Lagos could face Ghanaian enforcement for content deemed "hate speech" under Ghana's broad definition, with no opportunity to cure the violation through correction or apology (unlike misinformation under 35). This exceeds international norms and creates operational uncertainty that chills innovation and legitimate global discourse about Ghanaian affairs.
Extraterritorial Hate Speech Regulation
This provision allows Ghana to prosecute hate speech made anywhere in the world by anyone—regardless of nationality or connection to Ghana—as long as it targets a Ghanaian citizen (subsection 6). This exceeds international norms for extraterritorial jurisdiction and creates a chilling effect on global speech about Ghanaian affairs. Foreign journalists, activists, and commentators discussing sensitive topics could face prosecution under Ghana's broad hate speech definition (37), even if their speech is lawful where they are located. Unlike other violations, the hate speech provision has no residency threshold and cannot be cured through the correction/retraction defenses available under 35, leaving foreign speakers without meaningful safeguards.
Unpredictable Global Business Liability
This provision subjects any business globally to Ghanaian enforcement if their content targets Ghanaian citizens, regardless of where the business operates or whether it has any connection to Ghana. The hate speech exception (subsection 6) is particularly problematic: unlike other violations which require the offender to be Ghanaian or a 2-year resident, hate speech claims apply to anyone, anywhere. For digital platforms, content creators, and media businesses, this creates unpredictable liability exposure and forces costly compliance with Ghanaian law even when operating entirely outside Ghana. The broad definition of "hate speech" under 37 compounds this uncertainty, making it difficult for businesses to know what content is permissible and creating barriers to market entry for startups and smaller platforms.
Vague Hate Speech Definition Chills Innovation
The definition uses subjective terms like "negative feelings," "hostility," and "attitudes" without clear boundaries, making it impossible for digital platforms and content creators to know what speech is prohibited. The removal of intent requirements means platforms face strict liability for user content, while the explicit inclusion of entertainment content (movies, songs, parodies, satire) directly targets Ghana's growing digital creative economy. This vagueness forces platforms to over-moderate content to avoid liability, increases compliance costs dramatically, and makes Ghana's digital ecosystem less competitive compared to other African tech hubs with clearer regulatory frameworks.
Vague Definition Creates Unmanageable Business Liability
The hate speech definition uses subjective terms ("negative feelings," "hostility," "attitudes") and eliminates intent requirements, making it impossible for businesses to predict what commercial speech, advertising, or entertainment content will trigger liability. The provision explicitly captures factual statements and satirical content (movies, songs, parodies), meaning truthfulness provides no defense. With "other identity factor" undefined, businesses cannot identify which groups are protected, and the dignity-based standard varies by community. This creates unmanageable legal risk for media companies, advertisers, content creators, and any business engaging in public communications—forcing over-censorship and stifling commercial innovation.
Pre-Publication Compliance Burden for Platforms
This provision requires digital platforms to apply Section 18's communication requirements before publishing any content that could constitute hate speech—a category that explicitly includes entertainment, satire, parody, and factual statements under 37. Combined with 39's liability framework, platforms become responsible for user-generated content they can "substantially dictate" or remove, effectively requiring pre-publication review systems for all potentially controversial content. This creates significant technical complexity and market entry barriers, particularly for startups lacking resources to implement comprehensive content screening systems, while discouraging innovation in content moderation tools.
Prior Restraint on Satire
This provision subjects satire, parody, and entertainment content to pre-publication communication requirements by incorporating Section 18 into the hate speech framework. Since the hate speech definition in 37 explicitly includes "entertainment in a movie, song, parody, skit or as a satire" that "promotes negative feelings" or "stigmatises" groups, artists and comedians must satisfy undefined communication requirements before publishing creative work. This creates a prior restraint mechanism on political satire and social commentary, with liability extending to editors and producers under the broad "control" definition in 39.
Hate Speech Compliance Costs
This provision requires businesses to apply Section 18's communication requirements to hate speech before publication, but the preceding provision defines hate speech so broadly—including content that "promotes negative feelings" or "stigmatises" groups, plus entertainment and satire—that media organizations and platforms face substantial compliance costs determining what content requires pre-publication verification. The following provision then exposes editors, content managers, and platform moderators to personal liability for hate speech they can "substantially dictate" or remove, creating legal uncertainty that forces businesses to adopt overly cautious content policies and implement expensive compliance systems.
Criminal Liability for Platform Moderation
This provision criminalizes hate speech that incites genocide or aggravated violence, but when combined with the broad "control over communication" standard in 39, it creates criminal liability exposure for platform moderators, editors, and content curators. Anyone who "substantially dictates how content should be framed" or can "communicate or remove content" may face criminal penalties for hate speech they did not originate. The provision is also incompletely drafted—it references "section []" without specifying penalties and cuts off the definition of "aggravated violence" mid-sentence—creating legal uncertainty about what conduct is prohibited and what penalties apply. This chills legitimate platform moderation and editorial judgment, forcing businesses to choose between over-moderation or criminal liability risk.
Incomplete Criminal Hate Speech Provision
This provision criminalizes hate speech that incites genocide or aggravated violence, but suffers from critical drafting defects that violate legal certainty. The sanction reference is incomplete ("section [] of this Act"), and the definition of "aggravated violence" cuts off mid-sentence ("motivated by"). When combined with the broad "control over communication" standard in 39—which extends liability to anyone who can "substantially dictate how content should be framed" or "remove content without recourse to the original author"—this creates criminal liability for editors, platform moderators, and publishers based on editorial judgment. The following provision grants the Division (headed by a President-appointed Director) initial adjudicatory power to evaluate whether speech incites violence using subjective criteria like "tone" and "purpose" (41). While criminalizing incitement to violence aligns with international standards, the incomplete drafting and broad secondary liability create legal uncertainty that chills legitimate editorial judgment and platform moderation.
Criminal Liability for Editorial Control
This provision criminalizes hate speech that incites genocide or aggravated violence, but when combined with the broad "control over communication" standard in 39, it exposes platform moderators, editors, and publishers to criminal penalties (up to 500 penalty units and one month imprisonment) for exercising editorial judgment over content. Anyone who "substantially dictates how content should be framed" or can "communicate or remove content" has "control" and faces potential criminal liability. The provision is also incomplete—it references "section []" without specifying penalties, and the definition of "aggravated violence" is cut off mid-sentence—creating legal uncertainty that makes compliance planning impossible and forces businesses to choose between over-moderation, market exit, or abandoning editorial control entirely.
Universal Liability Without Speaker Protections
This provision establishes blanket liability for hate speech and indecent expressions that applies equally to all persons—journalists, academics, activists, and ordinary citizens—without distinguishing between professional contexts, speaker roles, or levels of culpability. Combined with 42's low threshold for indecent expressions (statements that "may reasonably provoke violence"), this creates severe chilling effects on legitimate speech. While 44 references professional guidelines as "instructive," these are non-binding and do not establish clear defenses or safe harbors for journalists following ethical standards or academics engaged in scholarly debate.
Vague "Private Facts" Standard Chills Journalism
This provision prohibits disclosure of "private facts" using highly subjective standards—information must be "offensive, repulsive, embarrassing or shameful to a reasonable person"—creating legal uncertainty that will chill investigative journalism and whistleblowing. The prohibition extends to "commentary about private facts, opinions about private facts, innuendos and insinuations," capturing analytical journalism and opinion pieces. While a public interest defense exists, speakers must navigate unclear balancing tests and face liability for disclosing "partly private facts which were not necessary"—forcing journalists to self-censor rather than risk administrative proceedings and potential criminal penalties. The provision particularly protects government officials' private facts that might "adversely affect public interest" or "public trust," language broad enough to shield legitimate exposés of official misconduct.
Vague "Private Facts" Definition Chills Journalism
This provision defines "private facts" using subjective standards like "intimate detail" and "expected to be kept private" without objective criteria, creating legal uncertainty for journalists reporting on public figures. While it carves out public records and crime information, it broadly protects personal finances and relationships even when relevant to public accountability, placing the burden on publishers to prove exceptions apply. Combined with 47(3) (which makes prior circulation "immaterial"), this lacks the explicit journalism exemptions found in GDPR Article 85 and Commonwealth frameworks, forcing publishers to make risky subjective judgments about what details are "necessary in the public interest" under 45(7).
Unworkable Privacy Framework Stifles Digital Platforms
Section 47 creates an unworkable compliance framework for digital platforms and content creators by establishing that republishing information is a violation even if it was already publicly available (47.3's "immateriality" clause). This eliminates standard defenses used by news aggregators, social media platforms, and content curation services. Combined with 48's prohibition on using private facts in entertainment, this severely restricts digital creative content including satire, parody, and commentary—key drivers of Ghana's digital creative economy. The provision provides no practical mechanism for platforms to verify consent status (which can be revoked at any time), creating impossible compliance burdens for user-generated content services and making Ghana's framework incompatible with international digital business practices.
Perpetual Consent Requirement Eliminates Public Domain Defense
Section 47.3 states it is "immaterial" whether information was already publicly known or resulted from the subject's own conduct—meaning even republishing already-public information violates the law without ongoing consent that can be revoked at any time. This eliminates the standard "public domain" defense found in democratic jurisdictions and creates severe restrictions on journalism, commentary, and public discourse. Combined with 48's prohibition on using private facts in satire, parody, or entertainment, this framework prevents legitimate discussion of public figures and matters of public interest using information that is already widely known.
Republication Liability Undermines Media Business Models
Section 47.3 states it is "immaterial" whether information was already publicly known or resulted from the subject's own conduct, meaning businesses face liability for republishing publicly available information without ongoing consent. This creates impossible compliance burdens for news aggregators, content curators, archives, and research firms that rely on publicly available information. Combined with the perpetual consent revocation right, businesses cannot maintain historical content or databases without continuous consent verification—making standard media and research business models legally unviable and placing Ghanaian businesses at a structural competitive disadvantage against international platforms not subject to these restrictions.
Absolute Ban on Satirical Commentary
This provision categorically prohibits the use of private facts in parody, skit, or satire in mass media, with no exception for public interest. Satire and parody are core forms of political commentary protected in democratic societies—essential tools for criticizing public figures and holding government accountable. By banning these forms entirely, even when they serve legitimate public interest purposes, the provision creates a severe chilling effect on political speech. The undefined term "entertainment" creates uncertainty about whether any humorous political commentary is permissible, encouraging self-censorship by media outlets and content creators.
Public Officials' Privacy Shield
This provision establishes liability for publishing private facts about government officials, politicians, and public officers using the same standard as for private individuals, without recognizing that public figures have reduced privacy expectations on matters relevant to their public roles. The vague carve-out (disclosure must "adversely affect national security, public interest, public trust, public safety or public order") creates uncertainty about what constitutes legitimate investigative journalism. Combined with 50's provision allowing the Division to initiate complaints on behalf of officials, this enables government suppression of accountability reporting about officials' conflicts of interest, health conditions affecting job performance, or financial relationships—even when such disclosures serve legitimate public purposes.
Undefined Standing Creates Platform Liability Uncertainty
The provision allows "individuals affected by the publication" to claim disclosure of private facts without defining who qualifies as "affected," creating unpredictable liability for platforms hosting content. More concerning, the Division can initiate complaints "on behalf of aggrieved persons" without requiring those persons to come forward, enabling government-directed enforcement against platforms. The provision also allows estates of deceased persons to pursue claims indefinitely, creating perpetual liability for historical content, biographies, and archives—forcing platforms to monitor and potentially remove content about deceased individuals without temporal limits.
Division Can Sue Without Complainants
The provision allows the Division to initiate private facts claims "on behalf of aggrieved persons" without requiring those persons to come forward or consent, enabling government-directed enforcement against journalists and critics. The undefined standard for who is "affected by the publication" creates uncertainty about liability exposure. Additionally, estates can pursue claims indefinitely for deceased persons' private facts without temporal limits, restricting historical narratives and biographies of public figures long after death—a significant departure from democratic norms where privacy rights diminish upon death.
Perpetual Liability Threatens Historical Content
This provision allows estates of deceased persons to pursue claims indefinitely for disclosure of private facts, creating perpetual liability for businesses publishing biographical, historical, or archival content. Combined with the undefined "individuals affected" standard, businesses cannot determine when liability exposure ends or who might bring claims. This forces media companies, publishers, and digital platforms to either avoid historical content entirely or maintain indefinite legal reserves, particularly impacting documentary producers, biographers, and archival institutions.
Overly Broad Disclosure Definition Stifles Platforms
This provision defines "public disclosure" so broadly that information becomes actionable the moment it "becomes known by one or more persons"—creating severe compliance uncertainty for digital platforms, content-sharing services, and tech startups. Combined with the preceding provisions' expansive categories of "protected confidential information" (52, 53)—including Cabinet communications, economic data, and closed-door proceedings—platforms cannot determine what user-generated content might trigger enforcement. The provision lacks critical safeguards: no distinction between active publication and passive hosting, no threshold requirement (e.g., "widespread" disclosure), and no consideration of technical feasibility for content monitoring. This forces platforms to either implement expensive pre-publication screening systems (often technically impossible for encrypted services) or accept significant legal risk, creating structural barriers to entry for information-sharing platforms and chilling innovation in investigative journalism tools and whistleblower platforms.
One-Person Disclosure Triggers Enforcement
This provision defines disclosure as "public" when information becomes known by one or more persons—meaning even private communications with a single individual (a source, editor, or legal counsel) could trigger enforcement action under the Act. Combined with the preceding provisions' expansive categories of "protected confidential information" (52, 53)—including Cabinet communications, economic data, and closed-door proceedings—this creates a framework where journalists and whistleblowers face enforcement risk for virtually any disclosure of government information. The provision lacks safeguards for communications with editors, fact-checkers, legal counsel, or oversight bodies, and does not require that the discloser knew the information was protected, creating a chilling effect on investigative journalism and government accountability.
Vague Disclosure Standard Creates Business Liability
This provision defines information as "public" when it is "published by whatever means" and "becomes known by one or more persons"—an extremely low threshold that triggers enforcement under the Act. Combined with the preceding provisions' broad categories of protected confidential information (52, 53), this creates severe compliance uncertainty for businesses. Media organizations reporting on government economic plans, platforms hosting discussions about Cabinet decisions, or startups analyzing government data all face potential enforcement action. The provision does not distinguish intentional publication from accidental leaks, public interest disclosures from violations, or commercial speech from protected reporting, creating a compliance minefield that deters investment in information-related sectors and enables strategic litigation against businesses.
Low-Barrier Complaint Mechanism Enables Speech Challenges
The provision allows any person to file complaints against speech with a minimal merit threshold—merely "an allegation of fact" suffices to trigger administrative proceedings. The Division must determine merit within 2 working days based solely on whether the complaint alleges noncompliance, without assessing whether the speech is actually unlawful or whether constitutional protections apply. Anonymous reports can enter the system at the Division's discretion. This creates a low-barrier mechanism for challenging speech through administrative proceedings that can lead to correction orders, financial penalties, license suspension, and criminal prosecution, enabling coordinated complaint campaigns against critics, opposition voices, and investigative journalists.
Two-Day Response Window Burdens Businesses
Respondents to Division complaints must respond within 2 working days or face adjudication based solely on the complainant's case (default judgment). This timeline is substantially below international standards (typically 14-30 days) and creates severe operational burdens for startups, small media organizations, and individual content creators who lack dedicated legal teams. Missing the deadline triggers default judgment, which combined with the Division's power to recommend license suspension after three warnings (71), creates business continuity risks that disproportionately affect smaller digital businesses and innovation entrants.
Default Judgment Silences Speech Defenses
Respondents have only 2 working days to respond to complaints about their speech, or face adjudication based solely on the complainant's case (56(2), 56(3)). This compressed timeline is grossly inadequate for asserting speech-protective defenses like public interest disclosure, opinion/commentary exclusions, or correction/retraction protections—particularly for individual citizens, journalists, and small content creators lacking legal resources. The default judgment mechanism means a speaker's side may never be heard if they miss the deadline, creating a severe chilling effect on protected speech. This falls far below international standards (typically 14-21 days minimum) and disproportionately burdens those engaged in investigative journalism or public interest reporting.
Rapid Response Deadline Disadvantages Small Businesses
Businesses must respond to Division complaints within 2 working days or face default judgment based solely on the complainant's case. This compressed timeline—substantially below international standards of 14-30 days—creates disproportionate burdens on smaller media organizations, startups, and content creators lacking dedicated legal teams. Missing the deadline triggers adverse findings that can lead to license suspension or revocation (71), creating existential business risk. The provision forces businesses to maintain costly rapid-response capabilities, creating a competitive advantage for larger organizations with legal infrastructure while deterring market entry and innovation.
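To make the compression concrete, the sketch below (assuming "working day" means Monday to Friday, which the bill does not define) computes how little calendar time a respondent actually gets:

```python
from datetime import date, timedelta

def response_deadline(served: date, working_days: int = 2) -> date:
    """Roll the deadline forward by `working_days`, counting only
    Monday-Friday (an assumption; the bill does not define 'working day')."""
    d = served
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# A complaint served on Monday 3 March 2025 is due Wednesday 5 March;
# one served on Thursday 6 March is due Monday 10 March.
print(response_deadline(date(2025, 3, 3)))  # 2025-03-05
print(response_deadline(date(2025, 3, 6)))  # 2025-03-10
```

Either way, a respondent without standing counsel has roughly 48 hours to locate a lawyer, gather evidence, and file a defense.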
Executive Adjudication Creates Regulatory Uncertainty
The Division—an executive body whose Director is appointed by the President (14)—has initial adjudicatory power over most compliance disputes, including enforcement of annual audit requirements, fact-checking obligations, and content moderation standards. Platforms and media outlets must respond within 2 working days and face decisions within 5 working days on complex technical matters, with judicial review available only after exhausting administrative proceedings (60). The vague "significant public traction" standard for determining which cases reach court creates unpredictability about enforcement, while the asymmetrical exclusion of government allegations from Division jurisdiction creates an uneven regulatory playing field that particularly burdens startups and smaller digital businesses lacking resources to navigate rapid executive adjudication.
Executive Body Adjudicates Speech Disputes
This provision grants an executive-appointed Division initial adjudicatory power over most speech-related complaints, requiring individuals to exhaust administrative remedies before accessing courts (60). The Division determines whether speech constitutes misinformation, disinformation, hate speech, or private facts disclosure—all vague categories requiring constitutional analysis—yet operates under the National Communications Authority with a President-appointed Director (14). While certain matters are excluded (government allegations, criminal cases, monetary damages), the "public traction" standard for court referral is undefined, allowing the Division to retain jurisdiction over controversial speech most in need of judicial protection. This structure violates separation of powers by placing speech adjudication in executive hands, creating chilling effects as speakers face a non-independent tribunal before reaching courts.
Forced Administrative Exhaustion Delays Judicial Review
Businesses must exhaust proceedings before the executive-appointed Division before accessing courts (57(6)), with 2-working-day response deadlines (56(2)) and 5-working-day decision targets (58(5)). This creates regulatory uncertainty as compliance disputes are initially adjudicated by a non-independent body using vague standards like "significant public traction" to determine court referral (57(4)). The asymmetrical framework—where government allegations bypass Division jurisdiction (57(2)(c)) while businesses face mandatory administrative proceedings—creates unequal competitive conditions and delays access to impartial judicial review for potentially business-destroying decisions.
Vague Standards Enable Speech Suppression
The Division determines speech restrictions using the undefined standard of what is "just and right" (58(2)), with expedited 5-working-day decisions in "exceptional cases" (58(5)). This vague standard combined with compressed timelines creates systematic risk of erroneous speech suppression—particularly problematic given the Division's Director is presidentially appointed and decisions are binding with criminal enforcement (59). The provision lacks explicit procedural protections essential for speech cases: no burden of proof standards favoring speakers, no evidentiary hearing requirements, no mandate for reasoned written decisions, and no consideration of less restrictive alternatives.
Immediate Enforcement Before Appeal
The Division can enforce its own decisions through "such orders and directions as may be necessary" with parties facing "administrative and criminal penalties" for non-compliance (59.2-59.3). The provision appears to require immediate compliance before appeal rights are exhausted, meaning platforms could face license suspension, content removal orders, or other enforcement actions before judicial review. This creates severe operational uncertainty for digital businesses—particularly startups and smaller platforms—who must comply with potentially erroneous Division decisions immediately or face penalties, rather than having enforcement stayed pending appeal as is standard in democratic jurisdictions.
Censorship Before Judicial Review
Division decisions ordering content removal or corrections must be complied with immediately, with "administrative and criminal penalties" threatened for non-compliance (59.2). While appeal rights exist, the provision does not explicitly protect speech pending judicial review. This means the Division—whose Director is appointed by the President—can censor potentially protected speech (political commentary, investigative journalism, public interest disclosures) with judicial correction only available after the censorship has occurred. This inverts democratic norms where speech remains available until a court determines it unlawful, creating a prior restraint effect that particularly harms time-sensitive speech and encourages self-censorship even among speakers with strong legal defenses.
Enforcement Without Judicial Safeguards
The Division can enforce its own decisions immediately through "such orders and directions as may be necessary" (59.3), with parties facing "administrative and criminal penalties" for non-compliance (59.2) before appeal rights are exhausted. This creates severe business uncertainty as license suspensions, content restrictions, or financial penalties could be enforced immediately, causing irreversible operational and reputational harm even if later overturned on appeal. The provision concentrates investigative, adjudicatory, and enforcement powers in the same body without explicit protection for businesses during the appeal period.
No Business Protection During Appeals
The appeal framework provides inadequate protection for digital businesses facing Division enforcement actions. While judicial review is available, Division decisions are immediately binding and enforceable (59), meaning license suspensions or revocations can destroy a media business before appeals are heard. The restrictive appeal grounds exclude proportionality review and errors of law, and there is no automatic stay of coercive actions pending appeal (except for technical impossibility). For startups and small media outlets with limited legal resources, the 30-day appeal deadline and lack of interim relief create a "comply or die" dynamic that undermines business viability and chills market entry.
No Constitutional Speech Review Ground
The provision lists only four narrow grounds for courts to overturn Division decisions (60(5)), but omits explicit constitutional review for freedom of speech violations. While ground (c) permits setting aside decisions if "the communication or information was permissible under the Act," it's unclear whether this encompasses constitutional speech protections or only statutory interpretation. This ambiguity is critical because the Division—whose Director is appointed by the President (14)—has initial adjudicatory power over speech using broad, discretionary definitions of prohibited content. Without an explicit constitutional review ground, speakers cannot be confident courts will rigorously scrutinize whether Division sanctions violated their constitutional rights to freedom of expression and press freedom, creating a chilling effect on protected speech.
Narrow Appeal Grounds Threaten Businesses
The provision limits High Court review to four narrow grounds, excluding challenges based on proportionality (whether penalties fit the violation) or procedural fairness (whether the Division followed fair processes). Combined with 59's immediate enforcement requirement, businesses facing license suspension or revocation cannot effectively challenge these decisions before their operations are destroyed. A media outlet or platform suspended during the appeal period—which may take months or years—faces business collapse, employee loss, and market exit before judicial review occurs, making appeals practically meaningless even if ultimately successful.
Sanctions Stacking Threatens Platform Viability
The provision permits imposing multiple simultaneous sanctions (correction orders, content removal, access blocking, license suspension, administrative penalties, criminal penalties) without clear proportionality limits or graduated penalty structure. Platforms could face existential sanctions like license revocation or access blocking combined with other penalties for a single violation, even after voluntarily removing content. When combined with 63's strict liability standard (sanctions "even if the person does not know" information is false), this creates unpredictable liability exposure that will drive excessive compliance costs, over-moderation, and market exit by smaller platforms and startups.
Sanctions Stacking Enables Speech Suppression
This provision permits the Division to impose multiple simultaneous sanctions—including correction orders, content removal, access blocking, license suspension, administrative penalties, and criminal penalties—with only vague "necessary and proportionate" language providing no objective criteria. Combined with 63, which permits sanctions "even if the person does not know or has no reason to believe that the information is false," this creates liability without fault. The framework eliminates incentives for voluntary correction by permitting sanctions even after removal/retraction, and concentrates enforcement power in a non-judicial Division with a President-appointed Director who acts as both investigator and adjudicator. This creates severe chilling effect as speakers face arbitrary stacking of severe penalties for innocent mistakes or good-faith errors.
License-Based Sanctions Threaten Business Continuity
This provision authorizes license suspension or revocation as a sanction for content violations, creating existential threats to media outlets and platforms. Combined with the authority to impose multiple simultaneous sanctions without clear proportionality guidelines, businesses face unpredictable liability exposure that makes operational planning impossible. The provision also eliminates incentives for voluntary compliance by permitting sanctions even after content removal or retraction, contradicting standard regulatory practice that rewards self-correction to reduce enforcement costs and encourage responsible business conduct.
Strict Liability Correction Orders
The provision permits Correction Directions to be issued even when the person "does not know or has no reason to believe that the information is false" (Section 63(4)), creating strict liability for speech. Digital platforms, media outlets, and content creators can be compelled to publish corrections in specified forms and locations—bearing all compliance costs (Section 63(5))—without any knowledge requirement. This creates unpredictable legal exposure that cannot be mitigated through good-faith compliance, particularly problematic for startups and SMEs facing the bill's other compliance burdens (annual audits, risk assessments, fact-checking departments). The vague standard for "extreme consequences" (Section 63(3)) triggering newspaper publication requirements adds further uncertainty, discouraging investment in Ghana's digital economy.
Unpredictable Correction Costs
Businesses must bear all costs of complying with Correction Directions, including publication in newspapers and communication to specified audiences, even when they had no reason to believe information was false (Section 63(4)-(5)). This creates uninsurable financial liability that cannot be managed through due diligence, with costs falling disproportionately on smaller businesses. Combined with potential monetary damages (Section 63(6)) and the three-warning license suspension system (71), this creates cascading financial exposure that disrupts operations and distorts market competition.
Undefined "Substantially Similar" Standard Paralyzes Platforms
Section 64 empowers the Court or Division to order cessation of not just specific content, but also "any statement or material that is substantially similar" to alleged misinformation, disinformation, or hate speech. This undefined standard creates impossible compliance challenges for digital platforms, content aggregators, and automated systems—they cannot predict what content will be deemed "substantially similar" and thus face unpredictable legal exposure. Combined with strict liability (orders can issue "even if the person does not know or has no reason to believe that the information is false"), this provision makes it impossible for digital businesses to assess and manage legal risk through due diligence, fundamentally undermining investment certainty in Ghana's digital economy.
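The ambiguity is easy to see in even the crudest automated filter. The toy sketch below uses word-level Jaccard overlap (a hypothetical choice; the bill specifies no method or threshold) to show that any "substantially similar" detector ultimately rests on an arbitrary cutoff:

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard overlap between two texts (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa or wb) else 1.0

enjoined = "the minister diverted road funds into a private account"
new_post = "reports say road funds were diverted by the minister"

score = jaccard(enjoined, new_post)
print(f"overlap = {score:.2f}")  # ~0.38 -- is that "substantially similar"?
# The statute supplies no threshold, so whatever cutoff a platform
# picks (0.3? 0.5? 0.8?) is a guess about how the Division will rule.
```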
Strict Liability Chills Business Communications
Section 64(6) permits Stop Communication Directions to be issued against businesses "even if the person does not know or has no reason to believe that the information is false"—creating a strict liability regime for business communications. Combined with the undefined "substantially similar" standard in s.64(3), companies cannot predict what statements might trigger enforcement, forcing them to self-censor marketing, public relations, and stakeholder communications. Businesses must bear compliance costs for correction notices and newspaper publications (s.64(5)), creating financial barriers particularly burdensome for smaller enterprises and startups lacking legal resources to navigate this uncertainty.
Strict Liability Content Removal Chills Innovation
This provision allows removal orders without requiring knowledge that content is false, creating strict liability for digital publishers and content creators. The requirement to remove "substantially similar" material—an undefined standard—forces platforms and startups to over-remove content to avoid liability, directly chilling innovation in content creation and distribution technologies. While third-party intermediaries cannot be compelled to remove content (only requested per their policies), individual publishers, news aggregators, and digital businesses face removal orders plus the full costs of compliance, creating prohibitive barriers for startups and small digital enterprises operating in Ghana's online ecosystem.
Vague Removal Standards Create Business Liability
The provision permits removal orders based on "substantially similar" content without defining what constitutes similarity, forcing businesses to make subjective compliance judgments that create unpredictable legal risk. Combined with strict liability (removal orders issued "provided there is evidence" without requiring knowledge the content was false), businesses face removal orders and mandatory compliance costs for content published in good faith. The cumulative effect—removal plus mandatory corrections plus potential newspaper publication—creates substantial, unpredictable operational costs that discourage content creation and online business investment.
Comply Before Appeal Requirement
Subsection (3)(b) eliminates appeal as a defense to non-compliance, requiring speakers to comply with Directions restricting their speech before they can challenge those Directions in court. This reverses normal due process protections where coercive action against speech is stayed pending judicial review. A speaker who believes a Direction unlawfully restricts protected speech must still comply (or face escalating penalties including financial sanctions and account removal) before obtaining judicial determination of the Direction's lawfulness. This creates a prior restraint dynamic where the Division—an executive body appointed by the President (14)—can effectively suppress speech before courts review legality, chilling legitimate expression.
License Revocation for Administrative Non-Compliance
Licensed media entities face license suspension or revocation for failing to comply with administrative Directions after three warnings (68), even while appealing those Directions. This creates an existential business threat—license revocation effectively terminates media operations—for what may be administrative disputes rather than substantive violations. The provision also imposes 1,000 penalty units plus 100 penalty units per day of continued non-compliance, creating accumulating financial exposure that could be ruinous for smaller outlets. This enforcement mechanism concentrates significant power in the Division (an executive body) to effectively shut down media businesses through administrative proceedings.
Account Removal Without Judicial Review
The Division can request removal of online accounts after three compliance warnings, without explicit requirements for judicial review, notice to the account holder, or hearing rights before the request is issued. While subsection (4) protects politicians and "known public or social commentators," ordinary citizens lack the same protections, creating a two-tier speech system. The provision concentrates power in the executive-appointed Division to silence critics through administrative proceedings, with judicial review available only after the fact. When combined with 68's elimination of defenses and 70's vague standards for access blocking, this creates a coercive pathway to suppress dissent.
Website Blocking on Vague Criteria
This provision allows the Division to order ISPs to block access to entire websites based on vague criteria like content "prejudicial to friendly relations" or that "unjustifiably projects the Republic as a defaulter of international law." Unlike content removal, access blocking prevents all Ghanaian users from reaching a platform, creating existential risk for digital businesses. The undefined standards provide no clear compliance guidance, deterring digital investment and innovation. ISPs face license revocation for non-compliance, transforming them into de facto censors and undermining the open internet infrastructure necessary for Ghana's digital economy.
Site-Wide Blocking on Diplomatic Grounds
This provision allows the Division or Court to order ISPs to block access to entire websites based on vague criteria like content "prejudicial to friendly relations" between Ghana and other countries. Unlike targeted content removal, access blocking prevents all Ghanaian users from accessing an entire online location—affecting innocent users and legitimate speech on the same platform. The ill-defined triggers ("prejudicial," "unjustifiably") give the President-appointed Division broad discretion to silence criticism of Ghana's international conduct, creating a severe chilling effect on speech about foreign policy, diplomatic relations, and international law compliance.
ISP License Revocation for Blocking Non-Compliance
This provision empowers the Division to order internet service providers to block entire online locations based on vague criteria like content "prejudicial to friendly relations" (70.1(c)), with license suspension or revocation as the penalty for non-compliance after three warnings (70.4). This creates existential risk for ISPs—essential infrastructure providers whose licenses are their entire business—forcing them to over-comply and block legitimate business content rather than risk losing their operating authority. The blanket blocking mechanism affects entire platforms rather than specific content, meaning a single allegedly problematic post could shut down access to an entire business platform for all Ghanaian users. Combined with the Division's executive control (Director appointed by President per 14) and vague substantive criteria, this creates a hostile, unpredictable environment for digital businesses and infrastructure providers.
Undefined Damage Liability Deters Innovation
Section 71(5) grants the Minister unconstrained authority to prescribe the "scope, extent and range of monetary damages" without defined criteria, parliamentary oversight, or damage caps. Combined with punitive damages (71(4)(c)) lacking clear standards, this creates unpredictable financial liability that digital startups and online platforms cannot accurately calculate or insure against. When layered with administrative penalties (69) and license suspension threats (71), this cumulative enforcement regime creates existential risk for smaller digital innovators with limited capital reserves, effectively raising barriers to market entry in Ghana's digital economy.
Uncapped Damages Chill Political Speech
The Minister can unilaterally set damage ranges for speech violations without parliamentary oversight or defined criteria (71(5)), while courts may impose punitive damages without clear standards for publishing "false or inaccurate election information" or "confidential information concerning the Republic" (71(3)-(4)). Combined with administrative penalties from 69 and license revocation from 71, this creates cascading financial liability that deters investigative journalism and political commentary. Democratic systems typically require legislatively defined damage caps and limit punitive damages to malicious conduct—this provision lacks both safeguards.
Ministerial Damage-Setting Powers Create Business Uncertainty
Section 71(5) grants the Minister unconstrained authority to prescribe the "scope, extent and range of monetary damages" without parliamentary oversight, defined criteria, or transparency requirements. This prevents businesses from accurately calculating compliance costs, obtaining insurance coverage, or conducting risk assessments. Combined with authorization of punitive damages without clear standards (Section 71(4)(c)) and cumulative enforcement through administrative penalties (69) and license suspension (71), this creates unpredictable financial liability that undermines business planning and investment decisions—particularly for smaller publishers and media outlets that cannot absorb undefined damage exposure.
Vague "Notorious Publisher" Standard Enables Arbitrary License Revocation
Section 72(1)(c) allows the Division to recommend license revocation after a publisher becomes "notorious for publishing false or other information" with only one compliance warning—bypassing the three-warning requirement in 72(1)(a). This undefined standard creates legal uncertainty that prevents digital businesses from understanding what conduct triggers existential risk. Combined with 73's warning-free Cease and Desist orders, this provision enables rapid escalation to business closure without clear criteria, proportionality analysis, or consideration of good-faith compliance efforts. The vague triggering standard deters investment in Ghanaian digital media, discourages startup entry, and creates regulatory capture risk where enforcement could selectively target government critics while protecting aligned publishers.
License Revocation Without Clear Standards
This provision allows the Division to recommend license suspension or revocation for publishers who become "notorious for publishing false or other information" after just one compliance warning (72(1)(c)). The term "notorious" is undefined and subjective—lacking criteria for how many publications, over what timeframe, or what types of violations trigger this status. This creates a shortcut around the graduated enforcement model requiring three warnings (72(1)(a)), enabling the Division to target publishers through vague standards. License revocation is an extreme prior restraint that eliminates a publisher's ability to operate, yet the provision lacks proportionality safeguards and doesn't reference defenses available elsewhere in the bill (quick correction, public interest). Combined with 73's Cease and Desist orders requiring no prior warning, this creates severe chilling effects on investigative journalism and government criticism.
License Revocation Creates Unmanageable Business Risk
This provision allows the Division to recommend license revocation—effectively forcing business closure—based on becoming "notorious for publishing false or other information" after just one compliance warning (72(1)(c)). Unlike standard regulatory violations with clear thresholds, businesses cannot assess or mitigate "notorious" status through compliance planning. This creates existential uncertainty that deters investment, disadvantages smaller media outlets lacking resources to contest subjective determinations, and concentrates market power among established players who can absorb regulatory risk. Combined with 73's warning-free Cease and Desist orders, a single contested violation could trigger the "notorious" pathway to business closure.
Immediate Business Shutdown Without Warning
The Division can issue Cease and Desist orders against any person "deemed to be engaged in the business of publication" based on vague criteria, with immediate administrative penalties for non-compliance without any compliance warning (Section 73(2)). This bypasses the graduated enforcement system established in 71 (three warnings before license action) and 73 (5-day response period), creating existential risk for digital businesses. The provision lacks procedural safeguards—no required notice, hearing, or appeal before penalties—and provides no timeframe for compliance or specification of prohibited conduct, making it impossible for digital platforms and publishers to predict or prevent enforcement actions.
Immediate Penalties Without Business Warning Period
The Division can issue Cease and Desist orders against businesses "deemed to be engaged" in publishing false or other information, with immediate administrative penalties for non-compliance without any Compliance Warning (Section 73(2)). This bypasses the graduated enforcement system that gives other violators three warnings before license suspension (71) and the normal 5-day compliance period (73). Businesses face immediate financial penalties without opportunity to correct course, creating operational uncertainty and investment risk in Ghana's media and digital sectors.
Escalating Penalties Threaten Digital Startups
The provision imposes 500 penalty units plus 100 units per day of continued non-compliance with Division orders, creating unlimited financial liability that accumulates indefinitely. For digital startups and small media organizations, this escalating penalty structure creates existential financial risk that forces compliance regardless of whether Division orders are legally sound. The provision provides no proportionality safeguards or consideration of a publisher's size or resources, and 76 creates a pathway where administrative non-compliance becomes criminal liability. This structure creates severe barriers to entry in Ghana's digital media sector and discourages innovation by making regulatory risk impossible to quantify.
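A back-of-the-envelope calculation makes the accumulation concrete (a sketch; the cedi value of a penalty unit is fixed elsewhere in Ghanaian law and is not assumed here):

```python
def accumulated_penalty(base_units: int, daily_units: int, days: int) -> int:
    """Total penalty units after `days` of continued non-compliance."""
    return base_units + daily_units * days

# The structure described above: 500 units up front, 100 more per day.
for days in (7, 30, 90):
    print(f"{days:>3} days -> {accumulated_penalty(500, 100, days):,} penalty units")
# 7 days -> 1,200; 30 days -> 3,500; 90 days -> 9,500:
# a dispute that runs one quarter multiplies the base penalty nineteenfold.
```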
Financial Coercion Enables Speech Suppression
This provision creates escalating administrative penalties (500 units + 100 units per day indefinitely) for non-compliance with Division orders, which can require removal of content deemed misinformation, disinformation, or confidential information. The cumulative daily penalties create overwhelming financial pressure to comply with orders before any independent judicial review, functioning as a de facto censorship mechanism. Critically, 76 transforms administrative non-compliance into criminal liability (200-500 penalty units + 1 month imprisonment) for private facts and confidential information violations, creating a dual-track enforcement system. The Division—headed by a Presidential appointee—both adjudicates violations and collects penalties, creating structural bias. This penalty structure enables suppression of protected speech, including government criticism and investigative journalism, by making non-compliance financially unsustainable regardless of whether the underlying speech is constitutionally protected.
Regulator Profits from Penalties It Imposes
The Division collects administrative penalties it issues, creating a financial incentive structure where the enforcer benefits from enforcement—penalties are paid "to the Division" rather than to a neutral treasury. This violates basic regulatory independence principles and creates perverse incentives for aggressive enforcement against businesses. Combined with undefined "Directions or Orders" that trigger penalties and 100 penalty units per day accumulation with no proportionality review, businesses face unlimited financial exposure from a regulator with financial motivation to maximize penalties. The pathway to criminal liability in 75 for non-compliance compounds this by threatening management with criminal records.
Criminal Liability for Causing "Anxiety"
This provision criminalizes speech causing "public harm," "fear," "unrest," or "public disturbance" with definitions so broad they encompass normal business communications. Tech companies face criminal penalties (up to 500 penalty units + 1 month imprisonment) for content that causes "anxiety about public policy changes" or "significant reputational damage" to government institutions—making it impossible to build compliant content moderation systems or engage in legitimate policy advocacy. When combined with 76, executives face criminal liability for failing to prevent employee or user speech they cannot reasonably monitor, creating severe operational risks and chilling effects on digital innovation.
Unpredictable Criminal Liability for Business Speech
This provision criminalizes speech causing "loss of funding" or "loss of human capital including strikes" (subsection 3), making routine business communications—such as reporting on government contracts, labor disputes, or policy impacts—potential criminal offenses. The "reasonable belief in falsity" standard means businesses face criminal penalties even when acting in good faith, while the vague definitions of "public harm" and "fear" make it impossible to predict what statements will trigger liability. Combined with 76's personal criminal liability for executives who fail to prevent employee speech, this creates an impossible compliance situation that deters legitimate business reporting and corporate communications.
Management Criminal Liability Undermines Safe Harbor
This provision creates criminal liability for tech company officers and managers who "ought reasonably to have known" about offences and failed to take "all reasonable steps" to prevent them. This directly contradicts the safe harbor protection in 77, which shields intermediaries from liability for third-party content. While platforms are protected from strict liability, their management faces imprisonment for failing to prevent user violations of vague offences like misinformation or hate speech. This forces companies to implement aggressive monitoring systems, creates barriers to entry for startups that cannot afford compliance infrastructure, and incentivizes over-moderation that stifles user expression and innovation.
Criminal Liability for Failing to Censor
This provision makes managers and officers criminally liable if they "ought reasonably to have known" about speech offences by employees or users and "failed to take all reasonable steps to prevent" them. This creates an affirmative duty to monitor and censor all employee and user speech to avoid criminal penalties including imprisonment. Combined with the Act's broad definitions of misinformation (false information regardless of intent), hate speech, and private facts, managers face criminal liability for failing to prevent speech that might violate subjective standards. This incentivizes aggressive pre-publication censorship and creates a powerful chilling effect on journalism, commentary, and public discourse—managers will err on the side of suppressing speech rather than risk criminal prosecution.
Criminal Liability Creates Compliance Trap
This provision imposes criminal penalties (fines and imprisonment) on corporate officers and managers who "ought reasonably to have known" about employee or user violations and "failed to take all reasonable steps to prevent" them. For businesses, this creates an affirmative duty to actively monitor and censor all content to avoid prosecution—but the underlying offences (misinformation, hate speech, private facts) have vague definitions that make compliance standards unclear. The result is a compliance trap: businesses must invest heavily in monitoring infrastructure or face criminal liability, but cannot know with certainty what conduct they must prevent. This creates barriers to market entry for startups and SMEs that cannot afford compliance costs, deters foreign investment due to unpredictable legal risk, and forces businesses to choose between over-censorship or criminal exposure for management.
Mandatory Annual Audits Burden Startups
This provision requires all internet intermediaries and media houses with online locations to conduct annual human rights due diligence audits of their algorithmic systems and content moderation practices, without differentiation by company size or capacity. Unlike comparable frameworks in OECD democracies (such as the EU's Digital Services Act), this applies uniformly to small startups and large platforms alike, creating substantial compliance costs that disproportionately burden smaller entities. The provision lacks clear standards for what constitutes adequate compliance, creating uncertainty, while penalties of 500 penalty units plus 100 per day of continued non-compliance create significant financial exposure that could be existential for startups and small platforms.
Undefined Audit Standards Create Financial Risk
This provision requires annual human rights due diligence audits but provides no standards for what constitutes adequate compliance, leaving companies unable to predict costs or assess compliance adequacy. The penalty structure—500 penalty units initially, then 100 per day of continued non-compliance—creates escalating financial exposure without clear guidance on how to satisfy the requirement. The Division has broad discretion to determine whether an audit meets the undefined "human rights due diligence" standard, making it impossible for businesses to plan compliance costs or assess their legal exposure. This lack of clarity particularly harms smaller entities that cannot afford extensive legal review or multiple audit iterations.
Pre-Publication Fact-Checking Kills Digital Platforms
This provision requires all internet intermediaries and content creators to fact-check information before publication and establishes mandatory fact-checking desks for platforms. For user-generated content platforms processing thousands or millions of posts daily, pre-publication verification is technically and economically impossible without either massive delays that destroy user experience or automated systems that will block legitimate speech. This directly contradicts the safe harbor protections in 77, which state intermediaries have no general monitoring obligation. Combined with the licensing prerequisite and training requirements, this creates insurmountable barriers for startups and smaller platforms while giving the Division discretionary power to exclude competitors through certification denial.
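A rough capacity estimate shows why pre-publication review cannot scale for user-generated content (all figures below are illustrative assumptions, not numbers from the bill):

```python
# Illustrative assumptions only -- none of these figures come from the bill.
posts_per_day = 100_000      # a mid-sized platform's daily user posts
minutes_per_check = 5        # optimistic time to verify a single post
workday_minutes = 8 * 60     # one fact-checker's working day

checkers_needed = posts_per_day * minutes_per_check / workday_minutes
print(f"full-time fact-checkers required: {checkers_needed:,.0f}")  # ~1,042
# And that is before multiple languages, images, video, or the
# publication delay every post would sit through in a review queue.
```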
Certification Gatekeeping Controls Market Access
This provision makes fact-checking certification a prerequisite for license renewal (subsection 5), giving the Division discretionary control over which businesses can operate in Ghana's media and digital sectors. Combined with 83's requirement for two years of bi-annual training before license renewal, this creates compounding barriers to market entry and continuation. The provision provides no standards for what constitutes adequate fact-checking or how certification decisions will be made, creating regulatory uncertainty that particularly burdens smaller media organizations and startups lacking resources for compliance infrastructure. This licensing prerequisite transforms the Division into a market gatekeeper with undefined criteria for business operation.
License Renewal Tied to Training
Licensed media entities and intermediaries must provide bi-annual in-house training on misinformation and disinformation, with license renewal denied if two years of training are not completed. This creates substantial ongoing operational costs and administrative burden, particularly for smaller organizations. Combined with requirements for fact-checking departments (82), annual risk assessments (81), and algorithm audits, the cumulative compliance regime significantly increases the cost of doing business in Ghana's media sector and could force smaller players out of the market.
Vague Paid Content Compliance Burden
This provision requires digital advertising intermediaries, influencers, and content creators to take "reasonable steps" to ensure paid content complies with the entire Act—including provisions on misinformation, hate speech, and private facts—but provides no clear guidance on what constitutes "reasonable steps." The penalty structure (100 penalty units initially, plus 100 per day of continued default) creates severe financial exposure that accumulates rapidly, particularly problematic for startups and smaller platforms. This vague compliance obligation combined with escalating daily penalties creates significant barriers to market entry, discourages innovation in advertising technology and creator monetization, and could chill the digital creator economy by making brand partnerships and sponsored content legally risky.
Paid Content Vetting Chills Speech
This provision requires content creators, influencers, and advertising intermediaries to ensure paid content complies with the entire Act—including provisions with vague definitions of misinformation, hate speech, and private facts. The "reasonable steps" standard creates uncertainty about what paid content is permissible, forcing creators and intermediaries to self-censor commercial speech rather than risk daily penalties of 100 penalty units. This effectively creates a prior restraint on paid speech, with enforcement discretion vested in a President-appointed Division Director, raising concerns about selective enforcement against monetized criticism or opposition-supporting advertising.
Paid Content Liability Burdens Business
This provision requires digital advertising intermediaries, influencers, and content creators to ensure paid content complies with the entire Act, creating substantial operational costs for content vetting and legal review. The daily penalty structure (100 penalty units per day of non-compliance) creates severe financial exposure, particularly for platforms handling high volumes of paid content. These compliance burdens and financial risks create barriers to market entry for startups and smaller platforms, discourage creator monetization and brand partnerships, and could drive digital advertising activity away from Ghana's market entirely.
Key Provisions
Scope of the Act
Plain Language Summary
This section lists the topics the law addresses: misinformation (false or inaccurate information), disinformation (deliberately misleading or biased information), hate speech, public disclosure of private facts, and publication of confidential information concerning the Republic. It also preserves existing common law remedies for misinformation and disinformation, while requiring adjudicators to take account of relief already sought or granted on the same facts before another body.
Original Legal Text
(1) The Act covers the following:
(a) Misinformation;
(b) Disinformation;
(c) Hate Speech;
(d) Public disclosure of private facts; and
(e) Publication of confidential information concerning the Republic.
(2) Nothing in this Act shall preclude a person from enforcing existing common law remedies in respect of misinformation and disinformation even where both actions run concurrently.
(3) Notwithstanding subsection (2), the Court, Division or other adjudicatory body shall take into consideration the relief sought, or the extent of a remedy granted or sanction imposed or satisfaction for breach offered to the aggrieved party in respect of the same facts forming the basis of misinformation and disinformation before another adjudicatory body.
Existing legislation
Plain Language Summary
This section states that the current law should be interpreted alongside several existing laws. These laws cover topics like cybersecurity, criminal activity, media oversight, data protection, and national security. This ensures that the new law works in harmony with the current legal framework.
Original Legal Text
(1) The Act shall be read together with the following enactments:
- (a) Cybersecurity Division Act, 2020 (Act 1038) (Cybersecurity Division Act), - (b) Criminal Offences Act, 1960 (Act 29) (Criminal Offences Act), - (c) National Media Commission Act, 1993 (Act 449) (National Media Commission Act), - (d) Data Protection Act, 2012 (Act 843), - (e) the National Communications Authority Act, 2008 (Act 769) (National Communications Division Act), - (f) the Electronic Communications Act, 2008 (Act 775) (Electronic Communications) Act), - (g) the State Secrets Act, 1962 (Act 101) (State Secrets Act) - (h) the Political Parties Act, 2000 (Act 574) (Political Parties Act) - (i) the Security and Intelligence Agencies Act, 2020 (Act 1030) (Securities and Intelligence Agencies Act) - (j) Presidential Office Act, 1993 (Act 463) (Presidential Office Act)
Principles on the Right to Freedom of Speech and Expression and the Right to Privacy
Enforcement and interpretation of constitutional rights
Plain Language Summary
This provision ensures the Act is implemented and interpreted in a way that respects the constitutional rights to freedom of speech, expression, and privacy. It mandates that the Act's enforcement aligns with these rights as outlined in the Constitution. This means the Act cannot be applied in a way that violates these fundamental rights.
Original Legal Text
The Act shall be enforced and interpreted in accordance with the right to freedom of speech and expression and the right to privacy under articles 21(1)(a) and 18(2) of the Constitution, respectively, and Chapter 12 of the Constitution.
Balancing private benefit against public benefit
Plain Language Summary
This provision instructs courts to consider both the importance of free speech and privacy, and the need to protect people from harm caused by false information or hate speech. When deciding cases, courts must weigh the benefits of upholding free speech and privacy against the benefits of preventing harm to individuals and institutions. This ensures a balance between individual rights and public safety.
Original Legal Text
In the application of the Act, the Court or Division (as the case may be) shall weigh the private benefit of enforcing the right to freedom of speech and expression and/or the right to privacy against the public benefit of protecting an individual, group of persons, private or public institution from the harm caused by the false information, hate speech or publication of other information.
Application and interpretation in favour of constitutional rights
Plain Language Summary
This provision directs the Court to prioritize freedom of speech, expression, and privacy when interpreting the Act, especially if the information serves a legitimate public benefit. Information is considered beneficial if it reveals criminal activity, government misconduct, criticizes the government, exposes civil wrongdoing, or concerns controversial public health opinions. This aims to ensure transparency and accountability by protecting the disclosure of information that is in the public interest.
Original Legal Text
(1) The Court or Division shall apply and/or interpret the Act in favour of the right to freedom of speech and expression and right to privacy where the information under consideration achieves a legitimate public benefit.
(2) For the purpose of subsection (1), information is of legitimate public benefit if that information:
(a) is intended to expose or exposes a person or institution's activities directly or indirectly related to, or connected with the commission, or the reasonable suspicion of commission of a crime under the laws of the Republic;
(b) is intended to expose or exposes a Government or public-related matter that is nationally dishonourable and inimical to values of probity and accountability;
(c) is intended to criticise or criticises the Government or public institution in relation to the management of a public office or the performance of official duties of a government official or public officer;
(d) is intended to expose or exposes civil wrongdoing done by or against an individual, group of persons, private and public institutions or the Government; or
(e) relates to a controversial public health opinion that is capable of being proven.
Establishment of liability
Plain Language Summary
This section explains how someone can be held responsible for their actions under this law. Liability is established to protect individual rights, national security, and public well-being. It requires a balance between public benefit and private interests, a fair and transparent process, and adherence to due process. For hate speech, liability is determined according to the rules in Part IV of this law.
Original Legal Text
(a) liability promotes the rights and reputation of an individual, group of persons, private or public institution, and protects national security, public order, public safety, public health or public morals;
(b) liability was reached upon ascertaining that the public benefit gained from culpability of a person for contravening the Act outweighs a private benefit, and there is no justification under section 7(2); and
(c) liability was determined by a fair and transparent criterion under the Act in accordance with due process.
(2) In addition to subsection (1), the establishment of liability for hate speech shall be in accordance with Part IV of the Act.
Imposition of sanctions and grant of remedies
Plain Language Summary
This section states that any punishment or solution for breaking the rules must be fair and appropriate for a democratic society. When deciding on a suitable action, the Court or Division must consider the harm caused, whether the action fixes the problem, and choose the option that is least restrictive. This ensures that penalties are not excessive and are tailored to the specific situation.
Original Legal Text
(1) Where liability is established under this Act, the Court or the Division shall not impose a sanction or grant a remedy for a non-compliance or for breach of conduct under the Act unless the sanction or remedy is that which is necessary and proportionate in a democratic society.
(2) In determining whether a sanction or remedy is necessary and proportionate, regardless of the sanction or remedy stated in the Complaint, the Court or the Division shall:
(a) justify the sanction or remedy against the evidence of the harm caused to an individual, group of persons or the public;
(b) determine whether the sanction or remedy is adequate under the circumstances to achieve the object and purpose of this Act; and
(c) apply the least intrusive means of restriction considering the circumstances, the rights involved and the desired result.
Institutional Framework
Establishment of the Division on Misinformation, Disinformation, Hate Speech and Publication of Other Information
Plain Language Summary
This provision creates a Division on Misinformation, Disinformation, Hate Speech and Publication of Other Information. This division will be responsible for enforcing the Act's regulations related to these issues. The division is established by the Board of the Authority under the National Communications Act.
Original Legal Text
(1) For the purpose of enforcement and implementation of this Act, the Division on Misinformation, Disinformation, Hate Speech and Publication of Other Information is hereby constituted by the authority of the Board of the Authority, pursuant to section 15 of the National Communications Act.
Powers of the Division
Plain Language Summary
This section details what the Division is allowed to do. It can create its own rules and make legally binding decisions regarding complaints. However, the Division is not a corporation and must act through the Authority for lawsuits. It also cannot independently own property or enter into contracts.
Original Legal Text
- (1) The Division shall have power to publish its own internal rules to streamline its functions under this Act.
(2) The Division shall have power to make findings of fact, establish liability and render binding decisions on sanctions and remedies in respect of Complaints or Reports under the Act.
(3) The Division is not a body corporate with perpetual succession or a common seal and shall only act through the Authority for the purpose of suing or being sued.
(4) The Division may not, for the performance of its functions, acquire and hold movable and immovable property and enter into a contract or any other transaction in its own name.
Functions of the Division
Plain Language Summary
This section details the responsibilities of the Division, which include monitoring compliance with the Act and promoting freedom of speech by increasing transparency and educating the public. The Division will also investigate complaints, determine liability, and impose penalties for violations of the Act. Furthermore, it will create a national plan to implement the Act and advise the Minister on relevant policy issues.
Original Legal Text
(1) The Division shall:
- (a) ensure and monitor compliance with this Act;
- (b) promote the right to freedom of speech and expression by ensuring functional transparency, fostering media literacy, providing verified information to the public and undertaking educational programmes and initiatives on false and other information;
- (c) sensitise the public on the object and purpose of the Act, and the rights, sanctions, remedies and defences under the Act;
- (d) receive and investigate Complaints or Reports of non-compliance with this Act or breach of conduct under the Act and make appropriate binding decisions in accordance with the Act;
- (e) establish liability and impose sanctions or remedies that are necessary and appropriate in a democratic society;
- (f) implement the requirements for all relevant stakeholders under this Act;
- (h) develop a national plan of action to address, monitor and report on the progress of implementation of this Act, to be submitted to Parliament through the Minister; and
- (i) advise the Minister on policy matters and any other matter relevant to the implementation of this Act.

(2) The Division shall submit an annual report on false and other information under the Act to the Minister.
Collaboration
Plain Language Summary
This provision mandates that the Division collaborate with entities like the National Media Commission. However, it also states that the Division's responsibilities will take priority when there are similar duties between the Division and another public institution, especially concerning the implementation of this law. This ensures the Division's authority in carrying out its specific functions under the Act.
Original Legal Text
12. (1) The Division shall work closely with the National Media Commission and other public institutions in the execution of its functions.

(2) In the event of parallel roles between the Division and any other public institution, the functions of the Division shall prevail for the purpose of implementing this Act.
Administration of the Division
Subdivisions
Plain Language Summary
This section establishes two main parts within the Division: one for handling complaints and investigations related to violations of the Act, and another for educating the public about the Act and providing accurate information. The Complaints and Investigation Subdivision will address reports of non-compliance and make binding decisions. The Public Information Desk and Outreach Subdivision will focus on informing the public, countering misinformation, and promoting freedom of speech through educational programs.
Original Legal Text
(1) The Division shall comprise of the following operational subdivisions:
- (a) The Complaints and Investigation Subdivision
- (b) The Public Information Desk and Outreach Subdivision

(2) The Complaints and Investigation Subdivision shall be responsible for receiving and investigating Complaints and Reports of non-compliance or breach of conduct under the Act, either at the instigation of a third party or on its own accord, and making appropriate binding decisions.

(3) The Public Information Desk and Outreach Subdivision shall be responsible for sensitising the public on the Act; providing information on request to the public; providing verified information to counter false information; and promoting the right to freedom of speech and expression through educational programmes and initiatives on false and other information.
Director of the Division
Plain Language Summary
This provision establishes the position of Director of the Division, who is appointed by the President. To be eligible, a candidate must have at least 10 years of experience in fields like telecommunications, law, or human rights. They also cannot have any direct involvement or financial interest in broadcasting networks, media houses, or internet intermediaries operating within the country.
Original Legal Text
(1) The Division shall be headed by a Director appointed by the President in accordance with article 195 of the Constitution.
(2) A person shall not be qualified for appointment as a Director of the Division unless that person:

- (a) has not less than 10 years of working experience in (i) telecommunications, (ii) law, (iii) regulatory compliance, (iv) education, (v) information studies or (vi) human rights; and
- (b) is not directly or indirectly involved in the management of; or
- (c) has no financial or commercial interest in a broadcasting network or media house or internet intermediary operating within the territory of the Republic.
Appointment of other staff
Plain Language Summary
This provision grants the President the authority to appoint additional staff to the Division. These appointments must align with constitutional guidelines. The purpose is to provide the Division with the necessary personnel to properly and effectively carry out its responsibilities.
Original Legal Text
- The President shall in accordance with article 195 of the Constitution, appoint for the Division, other officers and staff that are necessary for the proper and effective performance of its functions.
Ministerial Directive
Plain Language Summary
This provision allows the Minister to give instructions to the Division on matters concerning the Principal Authority. However, the Minister's instructions cannot alter any rights, responsibilities, or legal protections already defined in the law. This ensures that the Minister's guidance remains within the boundaries of the existing legal framework.
Original Legal Text
- The Division shall be subject to the Minister's directives on matters at the level of the Principal Authority, provided that the directive does not vary, amend, detract or add to any right, liability, sanction, remedy or defence under this Act.
Substantive Rules on Information
Information
Plain Language Summary
This section clarifies what kind of information is covered under the Act, focusing on factual statements that can be proven true or false. Opinions, commentary, and good-faith interpretations are not considered facts. Public criticism of government officials and dissatisfaction with public services are also protected and will not be considered false information under the Act. This means people generally won't be penalized for expressing their views or critiquing the government.
Original Legal Text
(1) In this section, the rules on information shall unless otherwise stated, apply only to misinformation, disinformation and other information.
(2) Except for hate speech under this Act, a person shall only be liable under this Act for the communication or publication of information relating to or about facts which contravenes the Act.

(3) Under this Act, a fact means a statement or material which can be verified as true or false.

(4) Unless otherwise provided in this Act, the following does not constitute a fact under the Act:

- (a) opinions about facts including personal views, beliefs or value judgments;
- (b) commentary about facts including analysis, criticism or editorial content; and
- (c) an objective interpretation of facts in good faith and supported by evidence.

(5) Unless otherwise provided in this Act, the following information shall not impose liability on a person for false information and other information:

- (a) public criticism or scrutiny of a governmental official or public officer, relating to the management of a public office or the official duties of a government official or public officer;
- (b) public criticism or dissatisfaction about the provision of a service to the public;
- (d) clearly identified partisan news;
- (e) subject to provisions on hate speech, information that is considered only insulting or disrespectful; and
- (f) true but imprecise information about a civil wrong or commission of a crime.
(6) For the purposes of this Act, clearly identified partisan news means information which is biased in its framing, leaning towards a political ideology or adopts subjective facts.
(7) For the purposes of this Act, acts are considered only insulting or disrespectful if they are personally rude or unpleasant, but do not offend a group of people and do not incite violence or threaten or expose that person to threat of harm.
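The layered filters in this section — verifiable fact, non-fact categories, protected classes of speech — can be read as a single short predicate. The sketch below is illustrative only: the category labels are assumptions made for the example, not terms defined in the bill, and the hate speech carve-out is not modelled.

```python
# Illustrative sketch only: restates the "fact" filter in this section as a
# predicate. Category labels are assumptions made for the example.
NON_FACT_CATEGORIES = {"opinion", "commentary", "good_faith_interpretation"}
EXCLUDED_FROM_LIABILITY = {
    "criticism_of_official_duties",
    "criticism_of_public_service",
    "clearly_identified_partisan_news",
    "merely_insulting_or_disrespectful",   # subject to the hate speech provisions
    "true_but_imprecise",
}

def can_ground_liability(category: str, verifiable: bool) -> bool:
    """A statement can ground liability (outside hate speech) only if it is
    a verifiable fact and not in a protected class."""
    if category in NON_FACT_CATEGORIES:
        return False          # not a "fact" under the Act
    if category in EXCLUDED_FROM_LIABILITY:
        return False          # protected even if factual
    return verifiable         # must be verifiable as true or false

print(can_ground_liability("opinion", verifiable=False))                # False
print(can_ground_liability("clearly_identified_partisan_news", True))   # False
print(can_ground_liability("factual_claim", verifiable=True))           # True
```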
Communication of Information
Plain Language Summary
This section defines how information is considered "communicated" under this law. It includes making information available through the internet, text messages, and broadcasts. The definition covers various forms of content, including AI-generated material. However, internet service providers and similar services are generally not considered publishers of information unless it is generated by their algorithms.
Original Legal Text
- (1) In this Act, for the purpose of false information, hate speech and other information, a statement or material relating to or about facts is communicated if it is made available to one or more persons in the Republic by means stated in subsection (2).
(2) A statement or material relating to or about facts is also communicated if it is made available to one or more end-users in the Republic on or through:

- (a) the internet;
- (b) MMS or SMS; or
- (c) television or radio broadcast.

(3) A statement or material relating to or about facts communicated under subsections (1) and (2) shall include written words, sounds, signs, objects, images and videos, including Artificial Intelligence generated statements or materials.
(4) Except for the algorithmically generated information, a person does not publish information in the Republic merely by doing any act for the purpose of, or that is incidental to, the provision of:
(a) an internet intermediary service;
(b) a communication service;
(c) a service of giving the public access to the internet; or
False information
Plain Language Summary
This provision defines "false information" as statements that are wrong, misleading, or deceptive, and can be disproven with verified facts. Even partially true statements can be considered false if they are misleading due to omitted information. The person claiming information is false has the responsibility to prove it.
Original Legal Text
- (1) A statement or material is false if it is wrong, fake, misleading, deceptive, doctored, whether wholly or in part, and whether on its own or in the context in which it appears.
(2) Information is false only if it can be disproven by verified and factual contrary information.
(3) A statement or material is false even if it is a partial disclosure of truth, provided that the omission makes the entire statement or material more misleading than true.

(4) Unless otherwise provided in this Act, the burden of proving that the information is false lies on the person alleging that the information is false, which may include the Division where appropriate.
Control over the information
Plain Language Summary
This section clarifies who is responsible for information shared, focusing on those who have control over it. You have control if you originally shared the information, directed its publication, can change the content, or can publish/remove it. Simply resharing information already public generally doesn't give you control, but employees may be liable alongside employers who authored the content.
Original Legal Text
- (1) Except for hate speech under this Act, a person shall be made liable for communication of information if that person has control over the information.
(2) A person has control over the information if that person:
- (a) is the original disseminator of the information;
- (b) is not the original disseminator of the information but retains authorship of the original information; or
- (c) used, instructed or guided another person or instrument to make the publication; or
- (d) is able to substantially dictate how the content of that information should be framed, edited or published; or
- (e) is able to publish or remove content relating to the information without recourse to the original author; or
- (f) threatens, blackmails or compels another person to release the information.
(3) Nothing in this Act shall prevent an employee from being jointly or severally liable with an employer who retains authorship of the information published in the course of employment.
(4) Except for hate speech under this Act, a person who republishes information that is already within the public domain does not have control over the information.
(6) For the avoidance of doubt, internet intermediaries do not have control over the information except where a Direction, Order or Compliance Warning is issued against it.
(7) Despite subsection (6), the Division may submit a request for content restriction on an internet intermediary where the Division decides that it is necessary and proportionate in accordance with section 9 of this Act.
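Because any one of the grounds in subsection (2) suffices, while subsection (4) carves out mere republication, the "control" test reduces to a simple predicate. The sketch below is illustrative only: the boolean flag names are assumptions made for the example, not statutory language, and the hate speech and intermediary carve-outs are not modelled.

```python
# Illustrative sketch only: the "control" criteria in subsection (2) as a
# predicate over assumed boolean flags; the flag names are not bill terms.
def has_control(is_original_disseminator: bool,
                retains_authorship: bool,
                directed_publication: bool,
                dictates_framing_or_editing: bool,
                can_publish_or_remove: bool,
                compelled_another_to_release: bool,
                merely_republished_public_info: bool) -> bool:
    # Subsection (4): republishing material already in the public domain
    # is not control (hate speech excepted; not modelled here).
    if merely_republished_public_info:
        return False
    # Subsection (2): any one ground suffices.
    return any([is_original_disseminator, retains_authorship,
                directed_publication, dictates_framing_or_editing,
                can_publish_or_remove, compelled_another_to_release])

# A reshare of already-public information: no control, hence no liability.
print(has_control(False, False, False, False, False, False, True))  # False
```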
Exclusion of persons
Plain Language Summary
This section clarifies who can be held responsible for violations of this Act. Children under 12 are exempt from liability, but adults who encourage them to break the rules can be held accountable. Parents or guardians may be responsible for a child's actions if they did not properly supervise them. People who are forced or threatened into releasing non-compliant information are also protected from liability.
Original Legal Text
- (1) Subject to the Juvenile Justice Act, 2003 (Act 653), a person below the age of 12 shall not be liable for non-compliance or breach of conduct under this Act.
(2) A person capable of being made liable under this Act who guides, advises, instructs, commands, requests or blackmails a person below the age of 12 to engage in conduct that is inconsistent with this Act shall be liable as the original disseminator of the information.
(3) Except for conduct criminalised, a parent or guardian of a child below the age of 12 years, or a person responsible for making decisions on behalf of that child, shall be made liable for the child's non-compliance on proof that the parent failed to take reasonable steps to supervise the child's activities.
(4) Unless otherwise stated, a person who is threatened, blackmailed or compelled to release information that is non-compliant with, or contravenes the Act shall not be made liable under this Act.
Prohibition on publication of false information
Misinformation and Disinformation
Plain Language Summary
This provision makes it illegal to spread misinformation and disinformation. Misinformation is defined as false information, regardless of intent, while disinformation is intentionally misleading. Individuals can be held responsible if they publish false information that harms the public interest and are subject to penalties detailed elsewhere in the law.
Original Legal Text
- (1) Misinformation and disinformation are prohibited in the Republic.
(2) Misinformation is the publication of false or inaccurate information regardless of the intention to mislead.
(3) Disinformation is the publication of false or inaccurate information intended to mislead, manipulate or guide people in a particular direction.
(4) A person shall be made liable for misinformation or disinformation if:
- (a) the information is a false or inaccurate statement or material relating to or about facts;
- (b) that person is not excused from liability under the Act;
- (c) the information is prejudicial to public interest under section [].
(5) A person liable for misinformation or disinformation shall be subject to the imposition of sanctions and grant of remedies provided in section [].
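Subsection (4) states three cumulative conditions, so liability is a conjunction: all limbs must hold at once. The sketch below is illustrative only; the parameter names are assumptions made for exposition, not terms defined in the bill.

```python
# Illustrative sketch only: subsection (4)'s cumulative conditions as a
# conjunction. Parameter names are assumptions for the example.
def liable_for_false_publication(is_false_or_inaccurate: bool,
                                 excused_under_act: bool,
                                 prejudicial_to_public_interest: bool) -> bool:
    # All three limbs of subsection (4) must hold together.
    return (is_false_or_inaccurate
            and not excused_under_act
            and prejudicial_to_public_interest)

# False but not prejudicial to the public interest: no liability.
print(liable_for_false_publication(True, False, False))  # False
```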
Due diligence of the certainty or accuracy of information
Plain Language Summary
This provision requires anyone publishing factual information to perform due diligence to ensure its accuracy. Certain groups, such as media outlets and politicians, are held to a higher standard of verification. However, individuals are not liable if due diligence would not have revealed the information was false, and all people are encouraged to publish reliable information to combat inaccuracies.
Original Legal Text
- (1) All persons publishing information that concerns a statement or material relating to or about facts shall conduct necessary due diligence and verify the certainty or accuracy of the information.

(2) Despite subsection (1), media houses, journalists, politicians, academics, persons with notoriety as influencers, persons known as public and social media commentators, persons of the class of celebrities, popular product brands and multinational companies shall be held to a higher standard of due diligence.
(3) It is a defence under this Act that due diligence could not have revealed that the information was false or inaccurate.
(4) A person shall not be liable under this Act by reason only that they did not conduct necessary due diligence of the certainty or accuracy of the information.
(5) All persons are encouraged to publish reliable information to discredit false or inaccurate information.
Business misinformation or disinformation
Plain Language Summary
This section prohibits profiting from the creation and spread of false information. It stops people from making or sharing false information for money or other advantages. Those who repeatedly publish false information that harms the public may be considered to be in the business of spreading misinformation and will face penalties.
Original Legal Text
- (1) A person shall not engage in the business of making, arranging or publishing false information, gratuitously or for financial reward, whether realised or not, or any other benefit or gain.
(2) A person shall not solicit, receive or agree to receive any financial or other material benefit as an inducement or reward for providing any service, knowing that the service is or will be used in the communication of information that contravenes this Act.
(3) A person who earns a reputation publicly for constantly and incessantly publishing false information which affects the public interest shall be presumed to be engaged in the business or object of misinformation or disinformation.
(4) A person who engages in conduct contrary to subsection (1), (2) and (3) shall be subject to sanctions and/or be required to provide remedies under Part VII of this Act.
Public interest
Plain Language Summary
This provision states that individuals can only be held responsible for spreading misinformation or disinformation if it's in the public's best interest to do so. An action is considered to be in the public interest if it is deemed necessary or advantageous. This limits the scope of liability under the Act.
Original Legal Text
- (1) A person shall only be made liable for misinformation or disinformation under this Act where it is in the public interest to do so.
(2) For the purposes of this Act, and without limiting the generality of the expression, it is in the public interest to do anything if the doing of that thing is necessary or expedient:
- (b) to protect public health, the public trust, or public finances, public welfare, or to secure public safety, public morals or public order;
- (c) in the interest of friendly relations of the Republic with other countries;
- (d) to prevent any distorted influence on the outcome of presidential, parliamentary, district assembly elections, unit committee elections, referendum or other elections supervised by the Electoral Commission;
- (e) to prevent incitement of feelings of enmity, hatred or ill-will between different groups of persons; or
- (f) to prevent a diminution of public confidence in the performance of any official duty or function of, or in the exercise of any power by a public institution.
Misinformation or disinformation by or against the Government
Plain Language Summary
This section grants the government the power to combat misinformation and disinformation, with some limitations. The government cannot use these powers against misinformation related to the ruling political party, but political parties retain their own rights. The government also cannot pursue action solely based on insults to high-ranking officials. The government bears the burden of proof in misinformation cases.
Original Legal Text
- (1) Subject to constitutional protections, the Government shall have enforceable rights against any person, and shall have rights enforced against it, in respect of misinformation or disinformation under this Act.
(2) The Government shall exercise its enforceable rights under the Act provided that the misinformation or disinformation does not concern the political party of the incumbent Government.
(3) Notwithstanding subsection (2), a political party shall retain enforceable rights in its own respects as an entity under this Act.

(4) The Government shall not exercise any enforceable rights under this Act by reason only that the misinformation or disinformation is merely insulting to the President, Vice-President or the Cabinet, as defined under section 17(7) of this Act.
(5) Subject to protections under the Constitution, an action for misinformation or disinformation at the instance of the Government shall lie against a private individual or private entity.
(6) For the avoidance of doubt, the respective offices of the Government affected by the misinformation or disinformation shall bear the burden of proof in any case, as the case may be.
Misinformation or disinformation against public institutions
Plain Language Summary
This provision allows public institutions to sue individuals or organizations that spread false information about them. This right is subject to constitutional protections, such as freedom of speech. The goal is to protect public institutions from damage caused by misinformation and disinformation.
Original Legal Text
- (1) Subject to constitutional protections, public institutions shall have enforceable rights against any person, and shall have rights enforced against them, in respect of misinformation or disinformation under this Act.
Misinformation or disinformation by or against a government official or public officer
Plain Language Summary
This section of the Act grants government officials, public officers (including judges), and election candidates the right to sue individuals who spread misinformation or disinformation about them, affecting their office or personal life. It also makes them liable if they spread misinformation. For election candidates, this protection begins when they publicly announce their candidacy, are nominated, or are publicly known to be contesting an election.
Original Legal Text
- (1) All governmental officials or public officers shall have enforceable rights against any person, and shall have rights enforced against them, in respect of misinformation or disinformation under this Act, for conduct against his or her office and in his or her own personal capacity.

(2) A member of the judiciary, as a public officer, shall have enforceable rights in respect of his or her office and in his or her own personal capacity.

(3) A candidate for elections shall have enforceable rights against any person, and shall have rights enforced against them, in respect of misinformation or disinformation under this Act.

(4) An individual is a candidate for elections if:

- (a) he or she has publicly declared their candidature in presidential, parliamentary, district assembly elections, unit committee elections or other elections supervised by the Electoral Commission;
- (b) has been nominated or chosen as a candidate in any of the stated elections; or
- (c) is publicly known to contest in any of the stated elections.
Misinformation or disinformation by or against a private individual or private entity
Plain Language Summary
This provision grants private individuals and entities legal rights concerning misinformation or disinformation. It allows them to sue those who spread false information about them. Conversely, it also makes them liable if they spread misinformation or disinformation themselves.
Original Legal Text
- (1) Private individuals or private entities shall have enforceable rights against any person, and shall have rights enforced against them, in respect of misinformation or disinformation under this Act.
False or inaccurate public health information
Plain Language Summary
This provision prohibits the publication of false or inaccurate information regarding public health matters, including pandemics. It requires individuals and media outlets to verify public health information before publishing it. Those who publish health information are responsible for proving its accuracy.
Original Legal Text
- (1) No person shall publish false or inaccurate information about public health in the Republic, public health crisis occurring in the Republic or a pandemic declared by the World Health Organisation.
(2) A person shall be deemed to have published false or inaccurate health information where that person publishes unverified statements about public health administration in the Republic, unsubstantiated medical statements or advice, unproven accounts about the potency or otherwise of a drug or medicine approved by the relevant authorities.
(5) Media houses, journalists and persons of the status of celebrities or influencers and content creators who publish public health information shall be required to undertake proper public health fact-checking in accordance with guidelines prescribed by the Division.
(6) For the purposes of public health information, the burden of proof of truth or accuracy of the information lies on the person accused of publishing false or inaccurate information or the offending party.
False or inaccurate election information
Plain Language Summary
This provision makes it illegal to publish false information about the Electoral Commission, election procedures, or election results if it could impact the outcome. It also prohibits working with foreign entities to spread such misinformation. Additionally, it restricts the publication of false scandals about candidates, although minor inaccuracies in otherwise true information are permitted.
Original Legal Text
31. (1) No person shall publish false or inaccurate information about the Electoral Commission, pre-election processes, voting day, collation of election results and election results which is likely to influence or influences the outcome of a general election to the office of President, a general election of Members of Parliament, a by-election of a Member of Parliament, or a referendum.
(2) A person shall not connive, collaborate or partner, directly or indirectly, with a country or foreign organisation to publish false or inaccurate election information about the Republic's Electoral Commission, pre-election processes, voting day, collation of election results and election results.
(3) The Division shall through the Ministry of Foreign Affairs and Regional Integration swiftly engage diplomatic channels of the foreign country in question over the allegations of false information.
(4) Subject to subsection (5), no person shall publish false information whether financial, political or sexual scandal about a candidate for elections, or a false allegation relating to a statement made or a stance taken by a candidate, which is likely to influence or influences the outcome of presidential, parliamentary, district assembly elections, unit committee elections, referendum or other elections supervised by the Electoral Commission.
(5) Information on a financial, political or sexual scandal about a candidate for elections or allegations relating to a statement made or a stance taken by that same candidate is not false by reason only that the information was largely true but part of the information was imprecise, provided that the imprecision does not make the entire information substantially untrue.
(6) Without limiting the effect of section 18(4), misinterpretation of a statement relating to or about facts concerning the Electoral Commission, pre-election processes, voting day, collation of election results and election results which is likely to influence or influences the outcome of a general election to the office of President, a general election of Members of
(7) Media houses, journalists and persons of the status of media influencers and content creators who publish election information shall be required to undertake proper election information fact-checking in accordance with guidelines prescribed by the Division.
(8) For the purposes of election information, the burden of proof of truth or accuracy of the information lies on the person accused of publishing false or inaccurate information or the offending party.
Sensationalism which leads to inaccuracy
Plain Language Summary
This provision prohibits media outlets from publishing otherwise accurate information with substantial embellishments that render it inaccurate. Sensationalism is defined as exaggerating information to evoke strong emotions beyond what the facts warrant. The provision clarifies that sensationalism itself is not prohibited and must not be interpreted to stifle creative expression.
Original Legal Text
- (1) The publication of otherwise accurate information by a media house with substantial embellishments to a high degree that it causes the information that is not fiction or satire to become inaccurate is prohibited.
(2) The test of what is sensational is whether the information is overly exaggerated and evokes strong emotion and sentiment where the actual statement or material does not reasonably evoke such emotions.
(3) For the avoidance of doubt, sensationalism is not prohibited, and this provision shall not be interpreted to stifle creative expression.
Evidence of Misinformation and Disinformation
Plain Language Summary
This provision specifies what can be used as evidence of misinformation or disinformation. This evidence can include the false information itself, medical or financial records, court or electronic records, negative media coverage, witness statements, documented correspondence, and video evidence. This list provides a range of options for proving that misinformation or disinformation has occurred.
Original Legal Text
(1) An aggrieved person or issuing party may submit the following as evidence of misinformation or disinformation:
- (a) evidence of the false or inaccurate information itself;
- (b) medical records;
- (c) financial records;
- (d) court records;
- (e) electronic records;
- (f) electronic records showing negative media coverage;
- (g) witness statements;
- (h) documented evidence of correspondence;
- (i) video evidence;
- (j) expert evidence;
- (k) surveys, polls, analytics;
- (l) metrics;
- (m) reports from regulatory bodies or fact-checking organisations;
- (n) scientific studies;
- (o) government data;
- (p) any other relevant evidence.
Communication made outside the territory
Plain Language Summary
This section clarifies when Ghanaian courts can rule on communications originating outside of Ghana. Generally, the law applies to Ghanaians living abroad or those who have lived in Ghana for a significant period. However, in cases of hate speech against Ghanaian citizens, the law can apply to anyone, regardless of their nationality or where they live.
Original Legal Text
- (1) Pursuant to subsection (3), the Court or Division shall have jurisdiction over communication made outside the Republic which contravenes this Act.

(2) A communication is made outside the Republic if it is made by a person who is not physically present in the territory of Ghana, notwithstanding that the communication was made on an online location.
(3) With the exception of hate speech under subsection (6), the Act shall have extraterritorial effect only to the extent that the offending party or respondent is Ghanaian or has been a resident in the Republic for a cumulative period of 2 years immediately preceding the date of publication of the false information, hate speech or other information.
(4) Communication or publication outside the Republic over which the Court or Division has jurisdiction under subsection (3) shall be deemed to be communication or publication of a statement or material in the Republic.
(5) Where the offending party or respondent lives outside the Republic, the Division shall engage mutual legal assistance for the enforcement of a sanction or remedy in that country.
(6) For hate speech communicated outside the Republic against a citizen of Ghana, the Act shall have extraterritorial effect regardless of the nationality or residence of the offending party or respondent.
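The reach rules in subsections (3) and (6) form a two-branch test: an unconditional branch for hate speech against a citizen, and a nexus branch for everything else. The sketch below is illustrative only; the parameter names are assumptions, and the mutual legal assistance mechanics of subsection (5) are not modelled.

```python
# Illustrative sketch only: the extraterritorial-reach rules in subsections
# (3) and (6) as a predicate. Parameter names are assumptions.
def ghana_has_jurisdiction(is_hate_speech_against_citizen: bool,
                           respondent_is_ghanaian: bool,
                           years_resident_before_publication: float) -> bool:
    # Subsection (6): hate speech against a Ghanaian citizen reaches any
    # offending party, regardless of nationality or residence.
    if is_hate_speech_against_citizen:
        return True
    # Subsection (3): otherwise only Ghanaians, or persons resident for a
    # cumulative two years immediately preceding the publication.
    return respondent_is_ghanaian or years_resident_before_publication >= 2

print(ghana_has_jurisdiction(False, False, 0.5))  # False: no nexus
print(ghana_has_jurisdiction(True, False, 0.0))   # True: hate speech rule
```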
Defences for Misinformation and Disinformation
Plain Language Summary
This provision protects individuals from being held liable for spreading misinformation or disinformation if they quickly correct the statement, retract it, and apologize. This offers a legal defense for those who make honest mistakes and take responsibility for them. The aim is to encourage prompt correction of false information without fear of legal repercussions.
Original Legal Text
(1) In addition to other protections provided for in this Act, a person shall not be made liable for misinformation and disinformation where:
(a) that person corrected or retracted that statement timeously and apologised;
(b) the false information was an inadvertent error and the offending party assumed responsibility;

(c) the information is not likely to influence or did not influence the outcome of a presidential, parliamentary, district assembly or unit committee election or referendum, or other elections supervised by the Electoral Commission; or

(d) under the circumstances, the statement of fact or material was not relied on or it was not likely that people would take it seriously.

(2) It shall not be a defence that the information was only inaccurate; however, it shall apply for the purposes of a reduced sanction or remedy or diminution of a sanction.

(3) What is timeous under subsection (1)(a) depends on the facts of each case.
Hate Speech and other forms of Indecent Expressions
Prohibition of Hate Speech
Plain Language Summary
This provision makes it illegal to communicate or spread hate speech within the country. This means that individuals are not allowed to express hateful or discriminatory views through speech or other forms of communication. The aim is to prevent the spread of harmful ideologies and protect vulnerable groups from discrimination and abuse.
Original Legal Text
- The communication or dissemination of hate speech in the Republic is prohibited.
Definition of Hate Speech
Plain Language Summary
This provision defines hate speech as communication that uses discriminatory language to target groups based on characteristics like race, religion, or sex. This includes actions that vilify, threaten, harass, or incite hatred/violence. Even if unintended, communication that affects a group's dignity or reputation can be considered hate speech, including content circulated as entertainment.
Original Legal Text
(1) Hate speech means any communication in speech, writing, behaviour or expression that uses pejorative or discriminatory language which:
- (a) vilifies, threatens, harasses, degrades, stigmatises, humiliates or discriminates; or
- (b) promotes negative feelings, hostility, attitudes or perceptions; or
- (c) incites hatred or violence

towards a group or class of people based on their race, ethnicity, colour, descent, religion, sex, background or other identity factor.

(2) Hate speech may be based on facts, prejudice, bias, generalisations or stereotypes.

(3) A factual statement which incites threats, hatred or violence towards a group or class of people based on their way of life constitutes hate speech.

(4) Provided the hate speech affects an individual or group's dignity, security, wellbeing, reputation and status in society, it is immaterial that the offending party did not intend the consequences of his or her actions.

(5) Communication circulated as a means of entertainment in a movie, song, parody, skit or as a satire that meets the threshold of hate speech in subsection (1) or (3) constitutes hate speech under this Act.

(6) A Complaint in respect of hate speech may be brought by one or more persons.
Communication of Hate Speech
Plain Language Summary
This provision extends the rules for communicating information under section 18 to hate speech. Any requirements about how information is shared, documented, or handled under section 18 will therefore also apply when dealing with instances of hate speech, ensuring consistent and appropriate handling of hate speech reports and incidents.
Original Legal Text
- The requirements of communication of information under section 18 shall apply to hate speech.
Control over the communication of Hate Speech
Plain Language Summary
This provision broadens the responsibility for hate speech to include individuals who have control over its communication. This includes the original source, those who republish it, or those who can influence or remove the content. The aim is to hold accountable those who facilitate the spread of hate speech, even if they are not the original authors.
Original Legal Text
(1) A person shall be made liable for communication of hate speech if that person had control over the communication.
(2) A person has control over communication of hate speech if that person:

- (a) is the original disseminator of the communication;
- (b) disseminates, republishes or reproduces the communication;
- (c) is not the original disseminator of the communication but retains authorship of the original communication; or
- (d) used, instructed or guided another person or instrument to make the communication; or
- (e) is able to substantially dictate how the content of that communication should be framed, edited or published; or
- (f) is able to communicate or remove content relating to the communication without recourse to the original author; or
- (g) threatens, blackmails or compels another person to release the communication.
Hate Speech that incites genocide or aggravated violence
Plain Language Summary
This provision makes hate speech that incites genocide a criminal offense, as defined by the Criminal Offences Act. It also criminalizes hate speech that is likely to incite or does incite aggravated violence, which is defined as violence that could lead to serious harm. Penalties for inciting aggravated violence will be imposed according to a specified section of this Act.
Original Legal Text
- (1) In accordance with section 49A of the Criminal Offences Act, hate speech that incites genocide is a criminal offence punishable under the Criminal Offences Act.
(2) Hate speech that is likely to incite aggravated violence or incites aggravated violence is a criminal offence under this Act, for which a sanction shall be imposed in accordance with section [] of this Act.

(3) For the purpose of this section, violence is aggravated if it is violence that is heightened in a way that leads to or is capable of leading to serious harm motivated by
Evaluation of hate speech
Plain Language Summary
Only courts or a specific division can officially evaluate if something is hate speech. To decide if a communication is hate speech, they must consider factors like the content, tone, context, and potential impact of the message. They will also assess the purpose of the communication, who it targets, and how serious the communication is. This evaluation determines any consequences or remedies.
Original Legal Text
- (1) Only the Courts or the Division have the mandate to substantively evaluate a communication alleged to be hate speech.
(2) In evaluating whether a communication amounts to hate speech to establish liability and the applicable sanctions or remedies for it, the Division or the Court shall determine whether the communication incites hatred or violence by looking at:
- (a) the entire content of the communication;
- (b) the tone and context of the communication;
- (c) the potential impact of the speech in terms of reach;
- (d) the purpose of the communication;
- (e) the people who are targeted; and
- (f) nature and gravity of the communication.
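Since the bill prescribes factors but no weights or scores, the evaluation reads as a mandatory checklist rather than a formula. The sketch below is illustrative only; the dictionary-of-notes representation of an evaluator's findings is an assumption made for the example.

```python
# Illustrative sketch only: a checklist mirroring the six factors in
# subsection (2). The bill prescribes no weights or scoring; this simply
# gathers the factors an evaluator must consider.
HATE_SPEECH_FACTORS = (
    "entire content of the communication",
    "tone and context of the communication",
    "potential impact in terms of reach",
    "purpose of the communication",
    "people who are targeted",
    "nature and gravity of the communication",
)

def outstanding_factors(notes: dict) -> list:
    """Return the factors the evaluator has not yet addressed in its notes."""
    return [factor for factor in HATE_SPEECH_FACTORS if factor not in notes]

# An evaluation that has only considered purpose still has five factors open.
print(len(outstanding_factors({"purpose of the communication": "satire"})))  # 5
```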
Other forms of indecent expressions
Plain Language Summary
This section outlaws certain indecent expressions that target groups, such as ethnic slurs and inflammatory statements that could provoke violence. The same rules for managing and controlling hate speech will also apply to these indecent expressions. In most cases, the standards used to evaluate hate speech will also be used for indecent expressions. Unless stated otherwise, references to hate speech in the bill also include these other forms of indecent expression.
Original Legal Text
(1) The following indecent expressions which do not incite hatred but target a group of people are prohibited:
- (a) ethnic slurs and derogative commentary of a group of people; and
- (b) inflammatory statements which may reasonably provoke violence against a group of people.
(2) The communication and control over the communication for hate speech under sections 39 and 40 respectively shall apply to indecent expression under this section.
(3) In evaluating whether a communication is an indecent expression, except for determining that the communication incites hatred or violence, the criteria in section 42(2) shall apply.
(4) Unless otherwise stated, and except for provisions under sections 37 to 42, reference to hate speech shall include other forms of indecent expression.
Liability and enforceability for hate speech and other forms of indecent expressions
Plain Language Summary
This provision states that anyone, including individuals, private organizations, government bodies, and public officials, can be held responsible for hate speech. This means that all persons and institutions are subject to potential legal consequences for engaging in or disseminating hate speech. The provision aims to create broad accountability across all sectors of society.
Original Legal Text
- (1) All persons including a private individual or private institution, a public institution or a Government or public official may be liable for hate speech.
Guidelines and code of ethics on hate speech and other forms of indecent expressions
Plain Language Summary
This provision states that assessments of hate speech and indecent expression should take into account guidelines from the National Peace Council. It also specifies that several codes of ethics, including those from media organizations, journalistic associations, and the Commission on Human Rights & Administrative Justice, should be considered. This aims to provide a comprehensive framework for evaluating and addressing such expressions.
Original Legal Text
- (1) The Guidelines on Hate Speech and other forms of Indecent Expressions issued by the National Peace Council shall be instructive in assessing hate speech and other forms of indecent expressions.
(2) The following Guidelines and Code of Ethics shall be given due consideration in assessing hate speech and other forms of indecent expressions:
- (a) National Media Commission Guidelines for Political Journalism;
- (b) National Media Commission Guidelines for Local Language Broadcasting;
- (c) The Ghana Journalists Association Code of Ethics;
- (d) The Ghana Independent Broadcasters Association Code of Conduct;
- (e) Private and Newspaper Publishing Independent Association Code of Ethics; and
- (f) Commission on Human Rights & Administrative Justice Code of Conduct for Public Office Holder.
Other Information
Disclosure of private facts
Plain Language Summary
This provision prohibits the disclosure of private information that is not publicly known. A person may be held liable if they reveal private facts in a way that identifies the individual and is considered offensive or embarrassing. The law also considers whether the information disclosed is of legitimate public concern or newsworthy.
Original Legal Text
- (1) No person shall disclose a private fact about a person's life that is not generally known to the public or publicly available.
(2) A public disclosure is any direct or indirect publication of information relating to or about facts in respect of a person, including commentary about private facts, opinions about private facts, innuendos and insinuations.
(3) A person shall be liable where the information and the circumstances of its disclosure clearly identify a person, with or without the publication of a name, pseudonym, photo or description of the person to whom the information reasonably relates.
(4) No person shall be made liable under this section unless the disclosure of the private fact is deemed offensive, repulsive, embarrassing or shameful to a reasonable person.
(5) In making a determination in subsection (3), the Division or the Court shall weigh the legitimate public concern or newsworthiness of the information.
- (a) reveals the commission of a civil wrong under the laws of the Republic;
- (b) relates to a private fact about a government official or public officer which is likely to adversely affect national security, public interest, public trust, public safety, public order or public security;
- (c) it reveals a public health risk to other persons in respect of an infectious disease spread by contact or touch, disclosed by persons with the Division to disclose under the Public Health Act, 2012 (Act 851); or
- (d) concerns the welfare of a child.
(7) Where the information is of legitimate public concern or is newsworthy, a person who discloses the information shall only disclose what is necessary in the public interest, and such person may be liable for disclosing partly private facts which were not necessary in the public interest.
Definition of private facts
Plain Language Summary
This provision defines "private facts" as intimate details about a person's life that are not widely known and are meant to be kept private, including details about family, health, finances, and relationships. However, information that is already public, such as details in official records or about criminal activity, births, deaths, and marriages, is not considered private. This definition helps to establish what information is protected under privacy laws.
Original Legal Text
(1) A private fact is an intimate detail of a person's life that is not generally known and is expected to be kept private and shall include facts about:
- (a) family life;
- (b) physical or mental health;
- (c) health choices or decisions;
- (d) personal finances unless there is a duty to declare or there is suspicion of illegitimacy or illegality;
- (e) relationships unless abusive or exploitative; and
- (f) personal choices that do not personally affect any other person.
(2) The following information shall not be considered private facts under this part:
- (a) information held in public or official records;
- (b) information about the commission of a crime except that a person shall not publish obscene material that relates to the commission of a crime pursuant to sections 66, 67 and 68 of the Cybersecurity Act;
- (c) information about the birth or death of a person;
- (d) information about the celebration or dissolution of a marriage;
- (e) information about one's educational background, education or professional and academic achievements;
- (f) information about a person's admission as a member to a recognised society in the Republic; and
- (g) information about the employment, profession, work or vocation of an individual unless it concerns a matter of national security under the Securities and Intelligence Act, 2020 (Act 1030) (Securities and Intelligence Act).
Publication of facts
Plain Language Summary
This section defines when sharing private information becomes a public disclosure. It states that if private information is shared through approved channels and becomes known to at least one person, it's considered public. Even if the information was already known, it's still a public disclosure if the person didn't consent to it being shared, and they can withdraw their consent at any time.
Original Legal Text
- (1) A disclosure is public if it is published by whatever means of communication adopted under section 18 and that information becomes known by one or more persons.
(2) Information is under the control of a person in the manner stated in section 20 of this Act.
(3) It shall be immaterial that as a result of conduct of the aggrieved person in the society, or because the information was known by another person or group of persons, the publication could not have been a public disclosure of private facts.
(4) The person to whom the private fact relates should not have consented to the disclosure of the information.

(5) 'Consented to information' under this part means freely disclosed the information or agreed to the disclosure of specific information for a specific purpose or duration.

(6) Nothing shall preclude a person from revoking consent at any time.
Entertainment
Plain Language Summary
This provision prevents anyone from revealing private information about individuals in mass media for entertainment purposes. This includes using private facts in parodies, skits, or satires. The goal is to protect people's privacy and prevent the exploitation of their personal information for public amusement.
Original Legal Text
- A person shall not disclose a private fact in the name of entertainment in mass media whether as a parody, skit or satire.
Private facts of private individuals, government officials, public officers, politicians and celebrities
Plain Language Summary
This provision addresses the publication of private information. It states that individuals can be held liable for publishing deeply personal facts about private individuals, government officials, public officers, politicians, and celebrities. Liability arises if the information doesn't affect national security, public interest, public trust, public safety, public order, or public security.
Original Legal Text
- (1) A person may be liable for the publication of private facts about an individual or group of individuals where that information is deeply personal and does not or is not likely to adversely affect national security, public interest, public trust, public safety or public order or public security.
(2) A person may be liable for the publication of private facts about a government official or public officer or politician where that information is deeply personal and does not or is not likely to adversely affect national security, public interest, public trust, public safety or public order or public security.
Persons who can claim
Plain Language Summary
Generally, only individuals whose private facts were disclosed can file a claim. There are exceptions where the Division can file on behalf of someone or act as a representative. The estate of a deceased person can also sue if private facts about the deceased were published.
Original Legal Text
(1) Subject to the Division's power to submit Complaints on behalf of aggrieved persons or issuing parties and actions in representative capacity, only individuals affected by the publication may make a claim for disclosure of private facts.
(2) An estate of a deceased person may hold a person liable for the publication of private facts about the deceased person.
Data privacy breaches
Plain Language Summary
This provision ensures that people can still take legal action under the Data Protection Act if their personal information is exposed or misused. It confirms that the bill does not prevent individuals from seeking remedies for data breaches. This protects the rights of individuals to address violations of their data privacy.
Original Legal Text
- Nothing shall bar a person from pursuing a remedy for breach of protection of personal or special personal data protection under the Data Protection Act.
Publication of confidential information concerning the Republic
Plain Language Summary
This provision makes it illegal to publish confidential information about the Republic. Confidential information includes state-owned or state-held data that isn't public, isn't meant for public release, and contains sensitive material impacting public security, welfare, or diplomacy. This aims to protect sensitive government information from unauthorized disclosure.
Original Legal Text
(1) A person shall not publish confidential information concerning the Republic.
(2) Information is confidential to the Republic under this section if that information:
- (a) belongs to the State or is in the custody of the State; and
- (b) is not publicly available as to the precise content; and
- (c) is not meant to be shared with the public at a specific time or indefinitely; and
- (d) contains sensitive material that affects or concerns public security or public welfare or diplomatic interests.
Categories of protected confidential information
Plain Language Summary
This provision protects certain confidential information belonging to the State. This includes details from closed-door public meetings, information that could compromise criminal investigations, communications within the Cabinet, and sensitive government economic data and plans. The aim is to protect the integrity of ongoing investigations, internal government discussions, and sensitive economic strategies.
Original Legal Text
(1) Information that is confidential to the State is protected and includes the following:
- (a) information about public proceedings held in camera;
- (b) information relating to criminal investigations which would prejudice the outcome of the case;
- (c) information relating to Cabinet communications;
- (d) information about sensitive economic Government data and plans.
(2) Information that is meant to be eventually released is protected information where its premature release affects public security, public welfare or diplomatic interests.
(3) Information is not protected if it is intended to expose the commission of a crime under the laws of the Republic.
(4) Information that is subject to the Right to Information Act is not protected.
Publication of protected information
Plain Language Summary
This provision defines when information is considered publicly disclosed. A disclosure is public if it is communicated through methods outlined in section 18 and becomes known to at least one person. The provision also references section 20, clarifying how information is considered to be under a person's control.
Original Legal Text
(1) A disclosure is public if it is published by whatever means of communication adopted under section 18 and that information becomes known by one or more persons.
(2) Information is under the control of a person in the manner stated in section 20 of this Act.
Operational Framework
Complaint to the Division
Plain Language Summary
This section explains how to file a complaint with the Division if your rights under this law have been violated. It allows someone to file on behalf of another person if they have a connection to them. The Division will then review the complaint to see if it has the authority to address it and if the complaint has valid claims.
Original Legal Text
(1) Any person who has enforceable rights under this Act may file a Complaint to the Division in the manner specified by the Division in respect of rights that have been, or are likely to be, contravened in relation to him or her.
(2) A Complaint may be filed on behalf of an aggrieved person by the issuing party provided that:
- (a) the aggrieved person is identified in the Complaint; and
- (b) the issuing party has a relational connection with the aggrieved person.
(3) A Complaint may be filed by a person as a claim in respect of public rights.
(4) A child shall not be competent to file a Complaint by him or herself and shall act by his or her next friend.
(5) A child shall not be competent to respond to a Complaint by him or herself and shall act by his or her guardian ad litem.
(6) Upon receipt of a Complaint, the Division shall within 2 working days make a preliminary inquiry into whether it has jurisdiction over the Complaint and shall determine whether the Complaint has any merit.
(7) A Complaint has merit where it contains an allegation of fact in respect of non-compliance and breach of conduct under the Act.
(9) Every Complaint shall indicate the following:
- (a) the basis for the assertion that the information or communication is unlawful;
- (b) the description of the allegedly unlawful information or communication; and
- (c) the remedy or sanction sought.
(10) Despite subsection (1), a person may file an anonymous or person-identifiable Report to the Division setting out an account of non-compliance or breach of conduct under the Act only for the information of the Division, and the Division shall exercise its discretion depending on the facts and evidence in the Report.
(11) All persons present in the Republic may file a Complaint or Report.
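The filing requirements lend themselves to a simple completeness check. Below is a minimal sketch, in Python, of how an intake system might test a Complaint against subsections (2) and (9) as read above; every field and function name is hypothetical and not taken from the bill.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    """Illustrative model of a filing; field names are hypothetical."""
    basis_of_unlawfulness: str   # subsection (9)(a)
    description_of_content: str  # subsection (9)(b)
    remedy_sought: str           # subsection (9)(c)
    filed_by_issuing_party: bool = False
    aggrieved_person: str | None = None
    relational_connection: str | None = None

def facial_defects(c: Complaint) -> list[str]:
    """List what is missing from a filing, mirroring subsections (2) and (9)."""
    defects = []
    if not c.basis_of_unlawfulness:
        defects.append("no basis for asserting the content is unlawful: (9)(a)")
    if not c.description_of_content:
        defects.append("no description of the allegedly unlawful content: (9)(b)")
    if not c.remedy_sought:
        defects.append("no remedy or sanction sought: (9)(c)")
    if c.filed_by_issuing_party and not (c.aggrieved_person and c.relational_connection):
        defects.append("representative filing without an identified aggrieved "
                       "person and relational connection: (2)(a)-(b)")
    return defects
```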
Response
Plain Language Summary
If the Division finds a complaint has merit, the party at fault will be notified and given two working days to respond. The response can either accept the complaint, accept it with an explanation, or deny it. If the party denies the complaint, they must explain why the information or communication was lawful and describe the specific content in question. If the party fails to respond in time, the Division will proceed with the complaint based on the information available.
Original Legal Text
(1) Where the Division has determined that it has jurisdiction and the Complaint has any merit, the Division shall immediately forward the Complaint and a statement of its jurisdiction and merit to the offending party, and the offending party shall be given the opportunity to respond to the Complaint in writing or by oral presentation, whichever they choose.
(2) The offending party shall respond to the Complaint within 2 working days of receipt of the Complaint.
(3) Where the offending party does not respond within the timeframe, the Division shall proceed with the Complaint and determine the matter based on the case of the aggrieved person or issuing party.
(4) Where the offending party responds to a Complaint or Report, the Response shall indicate any of the following:
- (a) whether the respondent accepts and concedes to the Complaint; or
- (b) whether the respondent accepts the complaint but has an explanation; or
- (c) whether the respondent refutes and defends the information or communication.
(5) Where a Response intends to provide a defence it shall indicate the following:
- (a) the basis for the assertion that the information or communication is lawful; and
- (b) the description of the allegedly unlawful information or communication.
Jurisdiction of the Division
Plain Language Summary
This section defines what types of cases the Division can handle regarding violations of the Act. The Division generally handles non-compliance issues, but it cannot make rulings on issues like hate speech, allegations against the government, or misinformation that could lead to criminal penalties. In some cases where the Division cannot adjudicate, it can refer complaints to the Court if they are in the public interest. Individuals can only bring cases directly to the Court if the Division does not have the authority to handle it.
Original Legal Text
- (1) Subject to subsection (2), the Division shall have jurisdiction over all matters of non-compliance or breach of conduct under this Act.
(2) The Division shall not have quasi-adjudicatory jurisdiction over:
- (a) hate speech that incites aggravated violence;
- (b) allegation of non-compliance or breach of conduct against the Government;
- (c) allegation of non-compliance or breach of conduct filed by the Government against a person;
- (d) monetary damages; and
- (e) misinformation or disinformation which attracts criminal sanction.
(3) Despite subsection (2), the Division shall have referral jurisdiction in the absence of its adjudicatory jurisdiction and may submit a Complaint on behalf of a person who has enforceable rights under this Act directly before the Court, where the Division is of the opinion that the matter is relevant to the public interest.
(4) 'A matter is relevant to the public interest' if, provided the Government is not the aggrieved person:
(a) the allegation attracts a criminal penalty under this Act; or
- (b) the allegation concerns a matter that has obtained significant public traction.
(5) In all cases falling under subsection (3) and (4), the Division must be satisfied that the Complaint has merit.
(6) No person shall submit a case under this Act directly to the Court unless the Division does not have jurisdiction.
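Read together, subsections (1) to (6) amount to a routing rule: the Division hears most matters, the subsection (2) carve-outs go to the Court, and the Division may refer a meritorious public-interest matter itself. A sketch of that reading, with illustrative identifiers only:

```python
CARVE_OUTS = {
    "hate_speech_inciting_aggravated_violence",  # (2)(a)
    "allegation_against_government",             # (2)(b)
    "allegation_by_government",                  # (2)(c)
    "monetary_damages",                          # (2)(d)
    "criminal_misinformation_disinformation",    # (2)(e)
}

def forum(matter: str, public_interest: bool, has_merit: bool) -> str:
    """Decide where a matter is heard under subsections (1), (3) and (6)."""
    if matter not in CARVE_OUTS:
        return "Division"                            # subsection (1)
    if public_interest and has_merit:
        return "Court, on referral by the Division"  # subsections (3)-(5)
    return "Court, filed directly"                   # subsection (6)
```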
Findings and Decisions of the Division
Plain Language Summary
The Division is responsible for impartially evaluating each complaint it receives. It must determine liability when justified and apply penalties that are appropriate for the violation. This ensures fair resolution of complaints and proportionate consequences for any misconduct.
Original Legal Text
- (1) The Division shall fairly and independently assess the merits of each Complaint submitted to it.
(2) The Division shall establish liability where it is just and right to do so and shall impose sanctions and remedies that are necessary and proportionate to the non-compliance or breach of conduct.
(4) In exceptional cases, the Division shall, depending on the severity of the Complaint and the extent of harm or threat of harm caused by the information or communication, expedite its processes for making its findings and rendering its decision.
(5) Pursuant to subsection (4), the Division shall aim to submit its findings and decisions within 5 working days of receipt of the Complaint.
Enforcement of decisions of the Division
Plain Language Summary
The Division's decisions are legally binding on all parties involved in a complaint. Failure to comply with the Division's decisions can lead to administrative or criminal penalties. The Division has the authority to issue orders and directions to ensure its decisions are enforced.
Original Legal Text
- (1) The decisions of the Division shall be binding on all parties to the complaint.
(2) The decisions of the Division shall be complied with by the parties to it, failing which the sanctions under administrative and criminal penalties may apply.
(3) For the purpose of enforcing and giving effect to its decisions, the Division shall make such orders and directions as may be necessary in respect of the parties or other persons.
Appeal against the Division
Plain Language Summary
This section allows individuals or authorities dissatisfied with a Division's decision to appeal to the High Court. Appeals must be filed within 30 days of the decision. The High Court can confirm, change, or overturn the Division's decision based on specific grounds, such as lack of evidence or technical impossibility of compliance. A further appeal can be made to the Court of Appeal.
Original Legal Text
(1) A person who is aggrieved by a finding of fact, liability or decision of the Division may appeal to the High Court against the finding of fact, liability or the decision of the Division in the person of the Authority.
(2) A further right of appeal shall lie at the Court of Appeal only.
(3) An appeal may only be made to the High Court within 30 days of the decision on the Complaint or such period as may be prescribed by Rules of Court, whichever is earlier.
(4) The High Court must hear and determine any such appeal and may either confirm, vary or set aside a finding of fact or liability or a decision of the Division.
(5) The High Court may only set aside a finding of fact or liability or a decision of the Division on any of the following grounds on an appeal:
- (a) the respondent was not responsible for the communication or information; or
- (b) the evidence does not support the finding of fact; or
- (c) the communication or information was permissible under the Act; or
- (d) it is not technically possible to comply with the decision of the Division.
(7) Despite subsection (6), if the appellant establishes a prima facie case that it is technically impossible to comply with the decision of the Division, the High Court may direct that the decision be stayed pending determination of the appeal.
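The time limit in subsection (3) takes the earlier of two dates: 30 days from the decision, or whatever period the Rules of Court prescribe. A small sketch of the computation, assuming calendar days since the subsection does not say working days:

```python
import datetime as dt

def appeal_deadline(decision: dt.date, rules_days: int | None = None) -> dt.date:
    """Earlier of 30 days from the decision and any period prescribed by
    Rules of Court, per subsection (3)."""
    statutory = decision + dt.timedelta(days=30)
    if rules_days is None:
        return statutory
    return min(statutory, decision + dt.timedelta(days=rules_days))

# e.g. a decision on 1 March with a 21-day prescribed period must be appealed by 22 March
assert appeal_deadline(dt.date(2025, 3, 1), 21) == dt.date(2025, 3, 22)
```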
Jurisdiction of the High Court
Plain Language Summary
This section defines the High Court's authority to review decisions from lower bodies. The High Court can hear appeals on factual findings and rulings made by the Division. It also has the power to handle initial cases regarding misconduct or violations of the law that the Division cannot address, with further appeals possible to the Court of Appeal and the Supreme Court.
Original Legal Text
- (1) The High Court shall have appellate jurisdiction in respect of findings of fact, liability and decisions of the Division.
(2) The High Court shall have original jurisdiction in respect of all matters of non-compliance or breach of conduct under the law where the Division does not have jurisdiction.
(3) Appeals from the High Court's exercise of original jurisdiction under this Act shall lie at the Court of Appeal and a further appeal shall lie at the Supreme Court.
Sanctions and Remedies
Plain Language Summary
This section details the penalties that may be applied if someone is found to have violated the Act. These penalties range from being ordered to correct information or stop communicating, to having content or accounts removed. The Court may also order access to be blocked, impose fines, issue cease and desist orders, suspend or revoke licenses, or even impose criminal penalties.
Original Legal Text
(1) Where the Court or Division makes a finding of fact and establishes liability against a person for non-compliance or breach of conduct under the Act, it may issue any of the following decisions as sanctions and/or remedies where appropriate:
- (a) a Correction Direction;
- (b) a Stop Communication Direction;
- (c) a Removal of Communication Direction;
- (d) a Removal of Account Request;
- (e) an Access Blocking Order;
- (f) monetary damages;
- (g) a Cease and Desist Order;
- (h) suspension or revocation of licence;
- (i) an administrative penalty; and
- (j) a criminal penalty.
(3) The Court or Division may impose more than one sanction or grant more than one remedy if it is necessary and proportionate.
(4) Despite subsection (3), the Division may recommend the imposition of additional sanctions or the grant of remedies to the Government or a public institution, as the aggrieved party, in respect of non-compliance or breach of conduct by a government official or public officer.
(5) Where the information or communication has been removed, deleted or retracted, nothing shall prevent the Court or Division from granting an appropriate remedy or imposing a sanction in respect of the wrong done.
(6) Nothing shall prevent the Division from publishing verified and true information to counter false information, and the requirement of verifiable information under section 40 shall apply to the Division.
Correction Direction
Plain Language Summary
This provision allows a court to order someone who has spread misinformation, disinformation, or hate speech to issue a correction. The correction notice must state that the information was false or harmful and provide accurate information. In some cases, the person may also have to publish the correction in a newspaper or printed publication. The person issued the correction is responsible for the costs, and this does not prevent a person from seeking monetary damages in addition to a Correction Direction.
Original Legal Text
- (1) The Court or Division may issue a Correction Direction against a person to correct misinformation, disinformation or hate speech.
(2) A Correction Direction is one issued to a person who is liable for misinformation, disinformation or hate speech, requiring the person to communicate in the Republic in the specified form and manner, to a specified person or description of persons (if any), and by the specified time, a notice (called a correction notice) that contains one or more of the following:
- (a) a statement, in such terms as may be specified, that the information is false, or that the specified material contains a false statement of fact;
- (b) a statement in such terms as may be specified, that the information is injurious to public interest or public security;
- (c) a specified statement of fact, or a reference to a specified location where the specified information or communication may be found, or both.
(2) A Correction Direction may require the person to whom it is issued to communicate in the Republic a correction notice in a specified location.
(3) Where the consequences of the misinformation, disinformation or hate speech are extreme, a Correction Direction may also require the person to whom it is issued to do one or both of the following:
- (a) to communicate in the Republic a correction notice in the specified form and manner, to a specified person or description of persons (if any), and by the specified time;
- (b) to publish the correction notice in the specified manner in a specified newspaper or other printed publication of the Republic.
(4) Provided there is evidence, a person who is liable under this Act may be issued a Correction Direction even if the person does not know or has no reason to believe that the information is false or the communication amounts to hate speech.
(5) A person who is issued a Correction Direction is responsible for the costs of complying with the Direction.
(6) Nothing shall prevent a person from seeking monetary damages in addition to a Correction Direction.
Stop Communication Direction
Plain Language Summary
This provision allows a court to order someone to stop spreading misinformation, disinformation, or hate speech. The order can require the person to remove the content and publish a correction. A "Stop Communication Direction" can be issued even if the person did not know the information was false or hateful. The person issued the direction is responsible for the costs of complying with it.
Original Legal Text
- (1) The Court or Division may issue a Stop Communication Direction against a person to correct misinformation, disinformation or hate speech.
(2) A Stop Communication Direction is one issued to a person who is liable under the Act, requiring the person to stop communicating in the Republic, the subject information or communication by a specified time.
(3) A Stop Communication Direction may also require the person to whom it is issued to stop communicating any statement or material that is substantially similar to the subject of the information or communication.
(4) Stop communicating, in relation to information or communication, means taking the necessary steps to ensure that the statement is no longer available through verbal communication or a physical medium or the internet to end-users in the Republic.
(5) A Stop Communication Direction may also require the person to whom it is issued to do one or both of the following:
- (a) to communicate in the Republic, a correction notice in the specified form and manner, to a specified person or description of persons (if any), and by the specified time;
- (b) to publish a correction notice in the specified manner in a specified newspaper or other printed publication of the Republic.
(6) Provided there is evidence, a person who is liable under this Act may be issued a Stop Communication Direction even if the person does not know or has no reason to believe that the information is false or the communication amounts to hate speech.
(7) A person who is issued a Stop Communication Direction is responsible for the costs of complying with the Direction.
(8) Nothing shall prevent a person from seeking monetary damages in addition to a Stop Communication Direction.
Removal of Communication Direction
Plain Language Summary
This provision allows a court to order someone to remove misinformation, disinformation, or hate speech online within the Republic. The person may also be required to publish a correction notice. The person issued the order is responsible for the costs, even if they didn't know the information was false, and can still seek monetary damages.
Original Legal Text
(1) The Court or Division may issue a Removal of Communication Direction against a person to correct misinformation, disinformation or hate speech where necessary, in addition to other Directions under this part.
(2) A Removal of Communication Direction is one issued to a person who is liable under the Act, requiring that person remove or take down the information or communication in the Republic by a specified time from an online location.
(3) A Removal of Communication Direction may also require the person to whom it is issued to remove any statement or material that is substantially similar to the subject of the information or communication.
(4) Removal of communication, in relation to information or communication, means taking the necessary steps to ensure that the statement or material is no longer available on or through the internet to end-users in the Republic.
(5) A Removal of Communication Direction may also require the person to whom it is issued to do one or both of the following:
- (a) to communicate in the Republic, a correction notice in the specified form and manner, to a specified person or description of persons (if any), and by the specified time;
- (b) to publish a correction notice in the specified manner in a specified newspaper or other printed publication of the Republic.
(6) A third-party intermediary shall not be compelled to remove content of a person; however, the Court or Division may request a restriction of content in accordance with an intermediary's content restriction policy.
(7) Provided there is evidence, a person who is liable under this Act may be issued a Removal of Communication Direction even if the person does not know or has no reason to believe that the information is false or the communication amounts to hate speech.
(8) A person who is issued a Removal of Communication Direction is responsible for the costs of complying with the Direction.
(9) Nothing shall prevent a person from seeking monetary damages in addition to a Removal of Communication Direction.
Service of Directions or Order
Plain Language Summary
This provision explains how an official Direction or Order must be delivered. It can be served either to the person named in the order, regardless of their location, or to someone they've appointed to receive it for them within the Republic. The specific methods for delivering the order will be detailed in other regulations.
Original Legal Text
(1) A Direction or Order shall be served in or outside the Republic by such means as may be prescribed:
- (a) on the person to whom it is issued; or
- (b) on a person in the Republic that the person to whom it is issued has appointed to accept service on the person's behalf.
Non-compliance with a Direction
Plain Language Summary
This section explains what happens if someone doesn't follow a Direction or Order from the Division. Unlicensed individuals get warnings first, but can face account restrictions or penalties after repeated non-compliance. Licensed individuals face fines and potential loss of their license for not following directions. Appealing the direction or conflicting duties does not excuse non-compliance.
Original Legal Text
(1) Unless otherwise provided in this Act, where a person to whom a Direction or Order is issued and served fails, without reasonable excuse, to comply with the Direction or Order, whether in or outside the Republic, the Division shall issue a Compliance Warning.
(2) Where a person other than a person licensed by the Authority fails to comply with a direction after three Compliance Warnings, the Division may:
- (a) issue a Removal of Account Request in accordance with section 69 of this Act; or
- (b) issue an Access Blocking Order in accordance with section 70 of this Act; or
- (c) impose an administrative penalty in accordance with section [] of this Act.
(3) It is not a defence under this section that:
- (a) the person is subject to a duty under any written law, any rule of law, any contract or any rule of professional conduct, that prevents the person from complying with any part of a Direction or restricts the person in such compliance; or
- (b) the person has appealed against the Direction.
(4) A person licensed by the Authority who fails to comply with a Direction after three Compliance Warnings is liable to pay to the Division an administrative penalty of one thousand penalty units and a further one hundred penalty units for each day the default continues.
(5) A person licensed by the Authority that fails to comply with subsection (4) risks suspension or revocation of its licence.
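Subsection (4)'s penalty for licensed persons accrues daily. A worked sketch, on the assumption that the further one hundred penalty units run for each full day the default continues:

```python
def licensed_entity_penalty(days_in_default: int) -> int:
    """Base of 1,000 penalty units plus 100 per day of continuing default,
    per subsection (4)."""
    if days_in_default < 0:
        raise ValueError("days_in_default must be non-negative")
    return 1_000 + 100 * days_in_default

# e.g. ten days of continuing default: 1,000 + 10 * 100 = 2,000 penalty units
assert licensed_entity_penalty(10) == 2_000
```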
Removal of Account Request
Plain Language Summary
This section states that internet companies decide whether to remove accounts based on their own rules. Courts generally cannot force companies to monitor content or censor too much. However, a court can request an account be removed if the user ignores legal orders after multiple warnings, but this does not apply to politicians or public figures.
Original Legal Text
- (1) Only internet intermediaries shall ultimately decide whether to remove an online account in accordance with their content moderation policies.
(2) The Court or Division shall refrain from imposing duties on internet intermediaries to proactively monitor online content or intermediary liability regimes that incentivise overbroad censorship.
(3) Despite subsections (1) and (2), the Division or Court may issue a Removal of Account Request for an online account on a foreign or Ghanaian regulated internet intermediary at an online location where the Division or Court has jurisdiction, and that person has deliberately failed to comply with a Direction or Order under this Act after receiving three Compliance Warnings.
(4) Without limiting the effect of subsection (3), the Division or Court shall not request the removal of an account of a politician or known public or social commentator.
Access Blocking Order
Plain Language Summary
This provision allows a court to order internet service providers to block access to online content if it contains misinformation, disinformation, or hate speech that harms the Republic's international relations or falsely portrays it as violating international law. This order can only be issued if the content is being communicated within the Republic (excluding internet intermediaries) and users within the Republic are accessing the content through a licensed internet service provider. Internet service providers that fail to comply with the order after multiple warnings may face penalties.
Original Legal Text
(1) An Access Blocking Order shall be issued where:
- (a) a person fails to comply with a Direction or Order; and
- (b) the subject statement or material is being communicated in the Republic by the person on an online location except an internet intermediary; and
- (c) the misinformation or disinformation or hate speech is prejudicial to the friendly relations between the Republic and other countries; or
- (d) the misinformation or disinformation unjustifiably projects the Republic as a defaulter of international law; and
- (e) the Court or Division is satisfied that one or more end-users in the Republic have used or are using the services of an internet service provider licensed by the Authority to access that online location.
(2) The Court or Division may direct the Authority to order the internet service provider to take reasonable steps to disable access to the online location (called in this section an Access Blocking order).
(3) An internet service provider that does not comply with any Access Blocking Order after three Compliance Warnings is liable to pay to the Division an administrative penalty.
(4) An internet service provider that fails to comply with subsection (3) risks suspension or revocation of its licence.
(5) No civil or criminal liability is incurred by an internet access service provider or an officer, employee or agent of such provider, for anything done or omitted to be done with reasonable care and in good faith in complying with any access blocking order.
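Subsection (1) mixes 'and' with 'or', so its conditions are ambiguous on the face of the text. One plausible reading requires (a), (b) and (e) together with at least one of (c) or (d); the sketch below encodes that reading and nothing more:

```python
def access_blocking_available(a_failed_to_comply: bool,
                              b_communicated_in_republic: bool,
                              c_prejudices_foreign_relations: bool,
                              d_projects_republic_as_defaulter: bool,
                              e_local_end_users_access: bool) -> bool:
    """One reading of subsection (1): (a) and (b) and ((c) or (d)) and (e)."""
    return (a_failed_to_comply
            and b_communicated_in_republic
            and (c_prejudices_foreign_relations or d_projects_republic_as_defaulter)
            and e_local_end_users_access)
```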
Monetary damages
Plain Language Summary
This provision allows individuals or parties harmed by certain actions to seek financial compensation. Monetary damages may be awarded if other remedies are insufficient, particularly in cases involving false election information or the disclosure of private or confidential information. The types of damages awarded can include general, special, and punitive damages, with the Minister having the authority to define the scope and limits of these awards.
Original Legal Text
- (1) An aggrieved person or issuing party may seek monetary damages either alone or in addition to other remedies or sanctions under the Act.
(2) The Court shall grant monetary damages where the extent of the damage caused by the information or communication cannot be effectively remedied only by the other remedies or sanctions under the Act.
(3) Despite subsection (2), monetary damages may be awarded in respect of the following:
- (a) false or inaccurate election information;
- (b) public disclosure of private facts;
- (c) publication of confidential information concerning the Republic.
(4) Monetary damages awarded under this section may be:
- (a) general;
- (b) special;
- (c) punitive.
(5) The Minister may prescribe the scope, extent and range of monetary damages for non-compliance or breach of conduct under the Act.
Suspension or revocation of licence
Plain Language Summary
This provision allows the Division to recommend that the Authority suspend or revoke a license if the license holder does not follow court orders or directions. This action can be taken if the person has received three warnings for non-compliance and has not paid the required penalty. This ensures licensees adhere to legal and regulatory requirements.
Original Legal Text
(1) The Division may recommend to the Authority the suspension or revocation of the licence of a person licensed by the Authority where:
- (a) that person fails to comply with a Direction or Order of the Court or Division after three Compliance Warnings and has not paid the administrative penalty; or
- (c) that person has become notorious for publishing false or other information and the Division has issued a compliance warning.
(2) The Authority may suspend or revoke a licence on grounds of non-compliance with the directive of the Authority in accordance with the procedure set out in regulations 119 and 120 of the National Communications Regulations, 2003 (LI 1719).
Cease and Desist Order
Plain Language Summary
This provision allows a Court or Division to issue a "Cease and Desist" order to stop someone from publishing false information. If the order is ignored, the person will face an immediate fine. This aims to prevent the spread of misinformation by giving the court power to stop it and penalize those who don't comply.
Original Legal Text
- (1) The Court or Division may issue a Cease and Desist order against a person who is engaged or is deemed to be engaged in the business of publication of false or other information.
(2) A person who fails to comply with a Cease and Desist order shall be subject to an administrative penalty without a Compliance Warning.
Compliance Warnings
Plain Language Summary
This provision explains how the Division issues Compliance Warnings to individuals who fail to follow a Direction or Order, or are found to be violating the Act. The warning directs the person to comply with specific instructions and expires upon compliance, further action, or if the Division deems it unnecessary. Individuals receiving a warning can respond to the Division to explain their actions.
Original Legal Text
- (1) The Division shall issue a Compliance Warning to a person for failure to comply with a Direction or Order within 5 working days of first issuance of the Direction or Order and in exceptional cases, 2 working days.
(2) The Division may issue a Compliance Warning where upon its own investigations or by a Report, it comes to the notice of the Division that a person is contravening the Act.
(3) The Compliance Warning shall direct the person against whom it is issued to comply with the directions stated therein.
(4) A Compliance Warning shall expire:
- (a) upon the compliance by the person against whom it is issued; or
- (b) upon the taking of further action by the Division in respect of the matter; or
- (c) where the Division decides that it is no longer necessary.
(5) A person against whom a Compliance Warning is issued may respond to the Division and justify the basis of their action or omission.
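The deadlines in subsection (1) are expressed in working days. A sketch of the computation, assuming a Monday-to-Friday week and ignoring public holidays, which the bill does not address here:

```python
import datetime as dt

def add_working_days(start: dt.date, days: int) -> dt.date:
    """Count forward Monday-to-Friday working days; public holidays are
    ignored in this sketch."""
    current = start
    while days > 0:
        current += dt.timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days -= 1
    return current

def warning_deadline(direction_issued: dt.date, exceptional: bool = False) -> dt.date:
    """Latest date for issuing a Compliance Warning under subsection (1):
    5 working days ordinarily, 2 in exceptional cases."""
    return add_working_days(direction_issued, 2 if exceptional else 5)

# e.g. a Direction issued on Friday 3 January 2025 ordinarily allows until Friday 10 January
assert warning_deadline(dt.date(2025, 1, 3)) == dt.date(2025, 1, 10)
```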
Administrative penalty
Plain Language Summary
This provision states that if someone doesn't follow a Compliance Warning or an Order from the Division, they will be subject to an administrative penalty. This penalty applies unless another part of the law says otherwise. Essentially, it's a consequence for not adhering to official directives.
Original Legal Text
(1) Except as otherwise provided in this Act, a person shall be subject to an administrative penalty for failing to comply with a Compliance Warning or an Order of the Division.
(3) Unless otherwise specified, in the case of an entity, the head of the management of the entity who fails to comply with a Direction or Order after a Compliance Warning is issued three times is liable to pay to the Division an administrative penalty of five hundred penalty units and a further one hundred penalty units for each day the default continues.
(4) Subsections (2) and (3) shall apply without prejudice to any other sanction or remedy available to the aggrieved person or issuing party in respect of the matter.
(5) A person liable for the public disclosure of private facts or the publication of confidential information concerning the Republic shall pay an administrative penalty of five thousand penalty units where that person has been given the opportunity to comply with the Direction or Order within 30 days of notice of first issuance.
Criminal penalty
Plain Language Summary
This provision makes it a crime to spread false information with malicious intent if it leads to public harm, violence, fear, unrest, or public disturbance and affects the public interest. A person found guilty of this misdemeanor could face a fine between 200 to 500 penalty units, imprisonment for up to one month, or both. The law defines specific examples of what constitutes public harm, violence, and fear in relation to misinformation.
Original Legal Text
- (1) A person who communicates or publishes false information with malicious intent, knowing it to be false, or having reasonable belief in the falsity of the statement which causes public harm, violence, fear, unrest or public disturbance shall be liable to a criminal penalty, provided the information concerns or affects the public interest.
(2) A person who is liable under subsection (1) commits a misdemeanour and shall be subject to a fine of not less than two hundred penalty units and not more than five hundred penalty units, or a term of imprisonment of not more than 1 month, or both.
(3) Information causes public harm if on the evidence, the misinformation or disinformation leads to:
- (a) loss of funding for the Government or public institution;
- (b) loss of human capital including strikes;
- (c) significant reputational damage;
- (d) law suits or sanctions on that public institution; or
- (e) inability for the Government or public institution to perform its function.
(4) Information causes violence if on the evidence, the misinformation or disinformation leads to intentional or unintentional use of physical force or power, threatened or actual, against another person that either results in or has a real likelihood in injury, death or psychological harm.
(5) Information causes fear if, on the evidence, the misinformation or disinformation:
- (b) causes anxiety about the threat to a person's life or their welfare; or
- (c) causes anxiety about the administration and/or management of a public institution; or
- (d) signals danger about a violent or disruptive event which is not real or apparent; or
- (e) creates widespread danger about an unknown or uncertain situation.
(6) Information causes unrest if, on the evidence, the misinformation or disinformation leads to:
- (a) agitation or panic in the Republic; or
- (b) widescale protests outside or within the Republic related to or connected to the information; or
- (c) riot or unlawful assembly in the Republic; or
- (d) widespread public or private layoffs in the Republic; or
- (e) ethnic and religious division or conflict in the Republic;
(7) Information causes public disturbance if on the evidence, the misinformation or disinformation leads to:
- (a) widespread shock or mental distress in the Republic; or
- (b) widespread public uncertainty or confusion about a health risk or emergency alert in the Republic or that may affect the Republic; or
- (c) widespread anxiety about change in public policy in the Republic.
(8) A person who is liable for hate speech that incites aggravated violence commits a second degree felony offence and is liable on conviction to a fine of not less than five hundred penalty units and not more than one thousand penalty units or to a term of imprisonment of not less than three months and not more than twelve months or to both.
(9) A person commits a criminal offence under the Act where that person is liable for publication of private facts or publication of confidential information concerning the Republic and:
- (b) fails to comply with a Compliance Warning within 30 days of its first issuance.
(10) An individual who commits a criminal offence under subsection (9) commits a misdemeanour and is liable on summary conviction to a fine of not less than two hundred penalty units and not more than five hundred penalty units.
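The fine ranges in this section are expressed in penalty units. The sketch below tabulates them and converts to cedis on the assumption that one penalty unit equals GHS 12 under the Fines (Penalty Units) Act, 2000 (Act 572); that rate can be revised, so the cedi figures are illustrative only:

```python
# Fine ranges under this section, in penalty units.
GHS_PER_PENALTY_UNIT = 12  # assumed rate; check the current statutory figure

FINE_RANGES = {
    "malicious false information, subsection (2)": (200, 500),
    "hate speech inciting aggravated violence, subsection (8)": (500, 1_000),
    "private facts / confidential information, subsection (10)": (200, 500),
}

for offence, (low, high) in FINE_RANGES.items():
    print(f"{offence}: {low}-{high} penalty units "
          f"(about GHS {low * GHS_PER_PENALTY_UNIT:,}-{high * GHS_PER_PENALTY_UNIT:,})")
```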
Offences by Entities
Plain Language Summary
This provision outlines how companies and their employees can be held liable for offenses. It states that a company's state of mind can be determined by the actions and state of mind of its employees. Furthermore, managers or leaders can be held responsible if they were involved in or failed to prevent the offense.
Original Legal Text
(1) Where, in a proceeding for an offence under this Act, it is necessary to prove the state of mind of an entity in relation to a particular conduct, evidence that an officer, employee or agent of the entity engaged in that conduct within the scope of the actual or apparent authority of the officer, employee or agent, and that the officer, employee or agent had that state of mind, is evidence that the entity had that state of mind.
(2) Where an entity commits an offence under this Act, a person who is an officer of the entity; or an individual involved in the management of the entity and in a position to influence the conduct of the entity in relation to the commission of the offence; and who:
- (a) consented to effect the commission of the offence, or is in any other way, whether by act or omission, knowingly concerned in, or party to, the commission of the offence by the entity; or
- (b) knew or ought reasonably to have known that the offence by the entity (or an offence of the same type) would be or is being committed, and failed to take all reasonable steps to prevent or stop the commission of that offence,
commits the same offence as the entity, and shall be liable on conviction to be punished accordingly.
(3) Nothing shall bar the prosecution of an employee or worker who is personally liable for the commission of an offence under this part but is neither an officer of the entity, nor in charge of the management of the entity, nor in a position to influence the conduct of the entity in relation to the commission of the offence.
Rules affecting stakeholders
Internet intermediaries
Plain Language Summary
This provision protects internet intermediaries from being held liable for content created by their users. Intermediaries are not responsible for illegal content they host if they did not create or modify it. They also don't have to actively monitor user content to ensure it's legal.
Original Legal Text
- (1) Internet intermediaries shall not be liable for third-party content in circumstances where they have not been involved in creating or modifying that content.
(2) Internet intermediaries shall not be made strictly liable for hosting third-party content which contravenes this Act.
Internet intermediaries regulated in Ghana
Plain Language Summary
This section clarifies that only internet companies regulated in Ghana are subject to this law. These companies must also ensure that their content moderation policies do not violate any part of this Act. This means that the government can only enforce this law against companies it already regulates.
Original Legal Text
- (1) Notwithstanding that a Removal of Account Request may be issued to all internet intermediaries, only internet intermediaries regulated by the Authority or other relevant authorities are amenable to this Act.
(2) For the purpose of this Act, except as otherwise mentioned, internet intermediaries mean Ghanaian regulated internet intermediaries throughout this Act.
(3) The content moderation policies of an internet intermediary shall not conflict with, or contravene any part of this Act.
Content restriction
Plain Language Summary
Internet intermediaries are generally not required to restrict content unless a court order has been issued. An order can only be issued if the original publisher of the content has failed to comply with a previous order and the content negatively impacts the Republic's diplomatic interests. The order must describe the content, provide evidence of its unlawfulness, and specify the time period for restriction. The section also allows individuals to flag illegal content and protects users from having their accounts removed by court order.
Original Legal Text
- (1) Internet intermediaries are not required to restrict content unless a Direction, Order or Compliance Warning has been issued by the Court or Division that has determined that the material contravenes this Act.
(2) Except where the internet intermediary modified the content, a Direction, Order or Compliance Warning shall not be issued against an internet intermediary unless:
(a) the third-party who published the information or communication has failed to comply with a Direction, Order or a Compliance Warning; and
(b) the information or communication negatively impacts the Republic's diplomatic interests or friendly relations with other countries.
(3) A Direction, Order or Compliance Warning issued against an internet intermediary to restrict its content must:
- (a) describe the content and provide a determination that the content is unlawful;
- (b) provide evidence sufficient to support the order; and
- (c) indicate the time period for which the content should be restricted.
(4) Any sanction imposed on an internet intermediary or any remedy required of an internet intermediary must be necessary and proportionate and directly correlated to the intermediary's wrongful behaviour in failing to appropriately comply with a Direction, Order or Compliance Warning.
(5) An internet intermediary who fails to comply with a Direction, Order or Compliance Warning to restrict content may be liable to:
- (a) monetary damages; or
- (b) an administrative penalty.
(6) Despite subsection (1), and subject to the content restriction policy of an internet intermediary, a person may flag illegal content or request content restriction on an internet intermediary.
(7) No internet intermediary may be compelled by the Court or Division to remove the account of a third-party.
(8) For the purpose of this section, content restriction means any act that leads to or has the effect of removing, pulling down, amending, limiting, blocking or regulating access to content or communication on mass media.
Algorithm and content moderation
Plain Language Summary
This provision requires online media outlets and internet platforms to perform yearly human rights audits. These audits will assess how their operations, including algorithms and content moderation practices, affect human rights. Failure to conduct these audits can result in warnings and financial penalties.
Original Legal Text
- (1) Media houses with online locations and internet intermediaries shall be required to carry out an annual human rights due diligence to identify and address human rights impacts related to their operations, including risks and abuses linked to their algorithmic systems and content moderation or arising from their business model as a whole.
(2) The Division shall issue a Compliance Warning for failure to comply with subsection (1) and upon further failure to comply, the internet intermediary shall be liable to pay to the Division, an administrative penalty of five hundred penalty units and a further one hundred penalty units for each day the default continues.
Misinformation and disinformation risk assessment
Plain Language Summary
This provision requires government bodies, media outlets, and internet platforms to conduct annual evaluations to identify and address the risks of spreading false or misleading information through their services. Non-compliance will trigger warnings and potential fines for internet intermediaries. The goal is to ensure these entities actively work to mitigate the spread of misinformation and disinformation.
Original Legal Text
- (1) All Ministries, public institutions, media houses and internet intermediaries shall perform an annual misinformation and disinformation risk assessment and take corresponding risk mitigation measures stemming from the design and use of their service.
(2) The Division shall issue a Compliance Warning for failure to comply with subsection (1) and upon further failure to comply, the internet intermediary shall be liable to pay to the Division, an administrative penalty of five hundred penalty units and a further one hundred penalty units for each day the default continues.
Fact-checking
Plain Language Summary
This provision requires media outlets, journalists, online platforms, content creators, and influencers to verify information before publishing it. Media companies and online platforms must create fact-checking departments to combat false information. Licensed individuals will need to undergo yearly fact-checking compliance checks and obtain a certification.
Original Legal Text
- (1) Media houses, journalists, internet intermediaries, digital advertising intermediaries, content creators and persons of the status of celebrity or influencer shall be required to fact-check before publishing information.
(2) Media houses and intermediaries shall set up fact-checking desks to counter misinformation and disinformation.
(3) Persons licensed by the Authority shall undertake annual fact-checking compliance with the Division and shall be issued fact-checking certification valid for the calendar year.
(5) Fact-checking certification shall be a prerequisite for the renewal or continued validity of licence issued by the Authority.
Training
Plain Language Summary
This section requires government ministries, certain public institutions, media outlets, and internet companies to conduct training twice a year on the publication of false information. Failure to provide this training will result in warnings and fines. Licensed individuals must provide two years of bi-annual training to be eligible for license renewal.
Original Legal Text
(1) All Ministries, selected public institutions by Ministerial Directive, media houses and internet intermediaries shall be required to provide bi-annual in-house training on the publication of false and other information under the Act.
(2) The Division shall issue a Compliance Warning for failure to comply with subsection (1) and upon further failure to comply, the entity shall be liable to pay to the Division, an administrative penalty of two hundred penalty units and a further one hundred penalty units for each day the default continues.
(3) A person who is licensed by the Authority shall not be granted a renewal of licence if that person has failed to provide two years of bi-annual training, whether or not one of the two annual trainings was provided.
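Subsection (3) makes renewal turn on a complete training record. One reading, sketched below with hypothetical names: a licensee needs both sessions in each of the two years, and a year with only one session does not count.

```python
def eligible_for_renewal(sessions_by_year: dict[int, int],
                         licence_period: tuple[int, int]) -> bool:
    """One reading of subsection (3): renewal needs two full years of
    bi-annual training; a year with only one of its two sessions fails."""
    start, end = licence_period
    return all(sessions_by_year.get(year, 0) >= 2 for year in range(start, end + 1))

# e.g. both sessions held in 2024 but only one in 2025 blocks renewal
assert eligible_for_renewal({2024: 2, 2025: 1}, (2024, 2025)) is False
```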
Paid content
Plain Language Summary
This provision requires digital advertising platforms, celebrities and influencers, and content creators to take reasonable steps to ensure their paid content complies with this Act. A first failure draws a warning; continued failure results in a fine of one hundred penalty units, plus a further one hundred penalty units for each day the violation continues.
Original Legal Text
- (1) A digital advertising intermediary, a person of the status of celebrity or influencer, or a content creator must take reasonable steps to ensure that any paid content does not lead to non-compliance or breach of conduct under this Act.
(2) The Division shall issue a Compliance Warning for failure to comply with subsection (1) and upon further failure to comply, the person shall be liable to pay to the Division, an administrative penalty of one hundred penalty units and a further one hundred penalty units for each day the default continues.
National security interventions
Plain Language Summary
The National Security Council and National Intelligence Bureau cannot investigate, arrest, or detain individuals for simply not following this law. The Police can only get involved if someone is being prosecuted for a crime under this law, or if police presence is needed to enforce a penalty. This section clarifies when different security agencies can take action related to this law.
Original Legal Text
- (1) Subject to the Security and Intelligence Agencies Act, the National Security Council and the National Intelligence Bureau shall not investigate, arrest or detain persons for noncompliance or breach of conduct under this Act.
The Police Service shall only intervene:
(a) in the event of criminal prosecution under this Act; and
(b) where their presence is necessary for the enforcement of a sanction under this Act.
Miscellaneous provisions
Regulations
Plain Language Summary
This provision allows the Minister to create regulations that further define how the law will be implemented. These regulations can cover areas like how internet companies manage content, how algorithms are used, and how different government agencies will work together. The regulations will also address administrative processes and codes of practice to ensure the law is effectively carried out.
Original Legal Text
The Minister may, by legislative instrument, make Regulations:
- (a) on specific matters relating to internet intermediaries and content restriction;
- (b) on specific measures related to algorithm and content moderation;
- (c) to prescribe the procedure for collaboration with other public institutions;
- (d) to prescribe matters related to administrative decision-making;
- (e) on codes of practice;
- (f) generally, on matters for the effective implementation of the Act.
Repeals, amendments and savings
Plain Language Summary
This provision repeals certain sections of the Criminal Offences Act and the Electronic Communications Act. It also amends a section of the Electronic Communications Act to penalize the electronic transmission of misinformation or disinformation that is likely to prejudice life-saving services or endanger public safety. A person convicted of this offense is liable to a fine of up to three thousand penalty units.
Original Legal Text
(1) The following provisions are repealed:
- (a) Section 208(1) and (2) of the Criminal Offences Act;
- (b) Section 74 of the Electronic Communications Act;
The Electronic Communications Act is amended in section 76 by the substitution for subsection (1) of:
'[a] person who by means of electronic communications service, knowingly sends a communication which constitutes misinformation or disinformation under the Misinformation, Disinformation, Hate Speech and Publication of Other Information Act which is likely to prejudice the efficiency of life saving service or to endanger the safety of any person, ship, aircraft, vessel or vehicle commits an offence and is liable on summary conviction to a fine of not more than three thousand penalty units.'.
Transitional provisions
Plain Language Summary
This provision stops all ongoing criminal prosecutions under section 208 of the Criminal Offences Act and section 74 of the Electronic Communications Act once this new law takes effect, wiping the slate clean so that pending cases under the repealed provisions do not continue under the new framework.
Original Legal Text
- (1) All criminal prosecutions in respect of section 208 of the Criminal Offences Act and section 74 of the Electronic Communications Act shall cease upon the coming into force of this Act.
Interpretation
In this Act, unless the context otherwise requires,
- 'aggrieved person' means a person whose rights have been infringed under the Act;
- 'Artificial Intelligence' is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision-making, creativity and autonomy, and for the purpose of misinformation or disinformation includes deepfakes, bots and manipulated algorithms;
'algorithm' means a set of instructions designed to accomplish a task;
- 'an academic' means an individual whether or not on sabbatical leave who:
- (a) was/is employed in a university or research institution and
- (b) was/is of the status of emeritus professor, professor, lecturer, deputy or assistant lecturer including research fellows;
- 'Attorney-General' means the Minister responsible for the Ministry of Justice and Attorney-General's Department;
- 'Division' means the Division on Misinformation, Disinformation, Hate Speech and Publication of Other Information established under section 9 of this Act;
- 'Board' means a governing board formed under a statute;
- 'business' means a professional, informal, commercial or industrial activity with the aim of producing goods or a service whether or not profit is realised;
- 'by-election' means an election held in a single constituency to fill a vacant position;
- 'celebrity' means an individual who is widely and publicly known by people in the Republic, and is famous for recognition in entertainment, fashion, modelling, arts, sciences, medicine, architecture, invention, engineering, law, sports, reality shows, business, philanthropy or politics, and does not include an individual who is solely recognised as a public or social media commentator, content creator, digital advertising intermediary or influencer;
- 'child' means a person under the age of 18 years;
- 'civil society organisations' means non-governmental, non-profit entities with humanitarian objectives and includes human rights organisations, professional associations, charitable organisations, faith-based foundations, community groups and non-governmental organisations.
- 'civil wrong' means an act or omission which gives rise to a civil cause of action;
- 'Cabinet' means the President, Vice-President and Ministers of State;
- 'Code of Ethics' means other instruments for assessing liability of hate speech;
- 'common law' means as established by article 11(2) of the Constitution, rules of law generally known as the common law, the rules generally known as the doctrines of equity and the rules of customary law including those determined by the Superior Court of Judicature;
- 'communicate' means to publish a statement or a material in the Republic;
- 'communication' means publication of a statement or material in the Republic;
- 'communication service' means a communication service provided by a communication network under the National Communications Authority Act;
- 'Complaint' means a formal allegation of wrongdoing under the Act directly affecting an aggrieved person and seeking a sanction or remedy under the Act;
- 'computing resource service' means a service that provides the use of any computer hardware or software to enhance the processing capability or storage capacity of a computer;
- 'Constitution' means the 1992 Constitution of the Republic;
- 'content creator' means an individual publicly known for professionally (part-time, as a freelancer or full-time) creating, producing and distributing original content on mass media for personal branding, business marketing, entertainment or education whether monetised or not, including a vlogger or blogger or an individual who holds him or herself out as a content creator;
- 'content' means visual and verbal information communicated or published on mass media and intended for public consumption, and includes texts, news, reports, documentaries, photos, songs, videos, films, movies, skits, parodies, satires, talk shows, editorials, public announcements;
- 'content moderation' means the process of reviewing third-party content generated on online locations to ensure it meets certain internationally accepted standards and guidelines;
- 'Court' refers to the High Court or other appellate court with jurisdiction over the matter;
'covered entity' means:
- (a) constitutional bodies;
- (b) the Legislature and the Judiciary;
- (c) Ministries, Departments, Agencies and local authorities;
- (d) statutory bodies;
- (e) the autonomous agencies; and
- (f) the public service;
- 'crime' means a criminal offence under the laws of the Republic;
- 'customary international law' means international obligations arising from consistent conduct of States (State practice) and the belief that they are acting in accordance with a legal norm or that it is legally required (opinio juris);
- 'decision of the Division' means an imposition of a sanction or a grant of a remedy;
- 'digital advertising intermediary' means any person who, in the ordinary course of business, facilitates the communication of paid content in any place by acting as the link or part of the link between the owners or operators of online locations, advertisers and service providers by means of internet-based service;
- 'diplomatic channels' means other than mutual legal assistance, correspondence with foreign ministries, diplomatic missions or international organisations;
- 'diplomatic interests' means economic, cultural and political interests of the Republic in relation to other countries;
- 'Direction' means a Correction Direction, a Stop Communication Direction or a Removal of Communication Direction;
- 'dissemination' means the act of spreading communication after initial communication;
- 'Division' means a division of the Authority under the National Communications Authority;
- 'due diligence' means investigating and confirming the veracity of information;
- 'Electoral Commission' means the Electoral Commission of Ghana responsible for public elections in Ghana;
- 'employee' means, whether or not a written contract exists and whether or not regularised, an individual who is appointed or hired permanently or for a specific period to perform a service for another individual or entity for compensation whether on a continuous, part-time, temporary or casual basis, and who is under the control and direction of that individual or entity;
- 'employer' means a person who appoints or hires an employee;
- 'enforceable rights' means a right or claim or cause of action under this Act that can be enforced against a person or a group of people before the Division or Court;
- 'entity' means an organisation, institution, company, establishment, partnership whether incorporated or not and includes the Government;
- 'existing law' means the written and unwritten laws of the Republic of Ghana as they existed immediately before the coming into force of this Act;
- 'fact' means information that can be verified as true, false or inaccurate, and does not include opinions or interpretations;
- 'fact-checking' means the process of verifying the truth or factual accuracy of a statement or material with or without the assistance of an instrument;
- 'family and friends' means a group of people who are closely connected to an individual through blood ties or strong personal relationships;
- 'freelancer' means a person who is not employed by another person but earns compensation for executing assignments from different persons.
- 'friendly relations' means international relations that promote world peace and security;
- 'funds' means the Consolidated Fund, the Contingency Fund, funds provided by the Authority and Parliament and any other fund established by or under an Act of Parliament;
- 'Gazette' means the official publication of legal notices by the Ghana Publishing Company;
- 'general election' means presidential and parliamentary elections in the Republic held every 4 years since 1992;
- 'Ghanaian' means a citizen of Ghana;
- 'Governing Board' means the governing board of the Authority established under section 6 of the National Communications Authority Act;
- 'government official' means a senior member of the executive, including the President, Vice-President, Ministers of State and senior presidential staffers, including members of Boards;
- 'Government' means any authority by which the executive authority of the Republic is exercised, including the Office of the President and Ministries;
- 'group of persons' means a collective number of individuals and/or entities;
- 'guardian ad litem' means a person who acts as the representative of a child who is an offending party;
- 'Guidelines' means Guidelines on Hate Speech and other forms of Indecent Expression issued by the National Peace Council;
- 'harm' means injury caused by a statement of an intention to inflict pain, injury, damage or other hostile action or to cause fear of harm, or injury caused by violence;
- 'inaccurate information' means information that is incorrect or incomplete by reason of an omission or misstatement, and unless otherwise provided includes false information, misinformation and disinformation;
'individual' means a single human being distinct from a group;
- 'infectious disease' means a disease caused by pathogenic microorganisms, such as bacteria, viruses, parasites or fungi, which can be spread, directly or indirectly, from one person to another;
- 'influencer' means an individual with mass media presence who has the ability to engage their audience and affect marketing power, behaviours or purchasing decisions through regular posts, comments, endorsements or collaborations because of their knowledge, authority, position or relationship with their audience;
- 'information' means communication of a statement or material, regardless of the form or medium which informs or suggests anything or scenario to a person;
- 'international human rights standards' means internationally recognised legal rights and restrictions outlined in treaties ratified by the Republic, declarations, interpretations and guidelines, and customary international law;
- 'international organisation' means an entity established under treaty or international law and possessing legal personality under international law;
- 'internet access service provider' means an internet service provider licensed by the Authority;
'internet intermediary service' means:
- (a) a service of transmitting such materials to end-users on or through the internet; or
- (b) a service that allows end-users to access materials originating from third parties on or through the internet;
- (c) a service of displaying, to an end-user who uses the service to make an online search, an index of search results, each of which links that end-user to content hosted or stored at a location which is separate from the location of the index of search results,
but excludes any act done for the purpose of, or that is incidental to, the provision of:
- (d) a service of giving the public access to the internet; or
- (e) a computing resource service;
Examples of internet intermediary services are: social networking services; search engine services; content aggregation services; internet-based messaging services; and video-sharing services;
'issuing party' means a person making a Complaint on behalf of an aggrieved party;
'institution' means an establishment, organisation, agency, department or body;
- 'instrument' means anything adapted to perform a function and includes computer programmes generally and computer programmes altered to perform automated functions;
- 'journalist' means a person, whether appointed as an employee or worker, whose work is to collect, prepare and/or distribute real news through mass media, or a person who is recognised as a journalist in the Republic;
'Judiciary' means the judicial service of the Republic;
- 'law suit' means any legal action against a person whether before a court of law or quasi-judicial body;
- 'mass media' means channels of public communication, storage and sharing of information and includes newsletters, newspapers, pamphlets, magazines, radio, movies, television, books, blogs, webcast, email and social media;
- 'material' means anything that consists of or contains a statement;
- 'media house' means an entity whether licensed or not, engaged in the business of gathering, creating, producing, distributing and managing news, entertainment and content and communicating to the public through mass media;
- 'Member of Parliament' means an individual elected in a general or by-election to represent a constituency in the Republic whether or not that seat is contested in a court of law;
- 'Minister' means the Minister responsible for Communications;
- 'Minister of State' means a person appointed to a high-office of the executive by the President for the administration of the Republic including a Deputy Minister;
- 'Ministry' means a principal decision-making body of the executive branch that exercises executive authority and implements policies on behalf of the Government and is headed by a Minister of State;
- 'Ministerial Directive' means a directive or instruction of the Minister under this Act;
- 'MMS' means a system that enables the transmission, through a mobile network, of multimedia messages;
- 'multinational companies' means companies that operate in more than one country or State;
- 'mutual legal assistance' means a process by which countries seek and provide assistance to other countries in the servicing of official documents and gathering evidence for investigating and prosecuting criminal cases;
- 'National Intelligence Bureau' means the internal intelligence agency of the Republic under sections 12 and 14 of the Security and Intelligence Agencies Act;
- 'national security' means anything relating to sovereignty, territorial integrity, constitutional order, terrorism, organised crime, espionage and cyber threat;
- 'National Security Council' means National Security Council established under article 83 of the Constitution and section 1 of the Securities and Intelligence Agencies Act;
- 'news agency' means an entity, whether licensed or not, engaged in the business of gathering, creating, producing, distributing and managing news and communicating it to the public through mass media;
- 'next friend' means a person who acts as the representative of a child who is an aggrieved person;
- 'Office of the President' means the seat of the executive, including the Office of the Vice-President and presidential staff appointed under the Presidential Office Act;
- 'Office of the Vice-President' means the seat of the executive responsible for carrying out the functions of the Vice-President;
- 'office' means a specific job or position held by a public officer or governmental official;
- 'officer' means a person of authority in an entity or a person who holds an executive position;
- 'official duty' means a responsibility imposed on a governmental official or public officer in accordance with the law;
- 'online account' means an account created with an internet intermediary for the use of an internet intermediary service;
- 'online location' means any website, webpage, chatroom or forum, or any other thing that is hosted on a computer and can be seen, heard or otherwise perceived by means of the internet;
'opinion' means a judgement, viewpoint, feeling or belief about someone or something;
'Order' means an Access Blocking Order or a Cease and Desist Order;
- 'other information' means the unjustified public disclosure of private facts or the publication of confidential matters concerning the Republic;
'paid content' means any statement that is communicated for consideration;
- 'Parliament' means the Parliament of the Republic, and also referred to as Legislature in this Act;
'people' means more than one individual or entity;
'person' means an individual or entity;
- 'Police Service' means the Police Service of Ghana established under article 200 of the Constitution;
- 'political party' means a free association or organisation of persons, one of whose objects may be to bring about the election of its candidates to public office or to strive for power by the electoral process and by this means to control or influence the actions of government, registered under the Political Parties Act;
'politician' means:
- (a) an individual who is a high-ranking member of a political party that is not in Government;
- (b) an individual who is seeking political office or an elected government official; and
- (c) a Member of Parliament;
- 'pre-election processes' means procedures involved in organising an election, including voter registration, nomination of candidates and campaigning;
'President' means President of the Republic;
- 'presidential staff' means individuals appointed by the President to work within the Office of the President and Vice-President to carry out executive functions;
'Authority' means the National Communications Authority;
- 'print media' includes newspapers, magazines, catalogues, calendars, reports, books, brochures and any print publication;
- 'private individual' means an individual who is not a government official or public officer and is not closely associated with the Government;
- 'private institution' means an entity that operates independently of government control, whether or not a public institution has a share interest in the institution, and regardless of whether it provides public services;
'private person' means an individual or entity;
- 'public benefit' means any positive impact on a large number of people in the Republic;
- 'public corporation' means a body corporate established under an Act of Parliament in accordance with article 192 of the Constitution;
'public finances' means money, expenditure, capital, debt relating to the Republic;
- 'public health crisis' means a health emergency that affects the public, including natural disasters, outbreaks, epidemics, environmental hazards, bioterrorism, chemical exposure, zoonotic disease transmission and mental health emergencies.
- 'public health' means anything relating to the protection and improvement of the health of people in the Republic through prevention, research, education, detection, policy development, cure and promotion of healthy living styles;
- 'public institution' means a covered entity, a state-owned enterprise, a public corporation or a public service entity and excludes the Government, Office of the President and Ministries;
- 'public morals' means anything relating to shared social and ethical standards in the Republic for the time being;
- 'public office' includes an office whose emoluments are paid directly from the Consolidated Fund or directly out of moneys provided by Parliament and an office in a public corporation established entirely out of public funds or moneys provided by Parliament;
- 'public officer' includes the holder of a public office and a person appointed to act in that office;
- 'public or social media commentator' means an individual who for whatever intended purposes, is known by a group of people for regularly sharing opinions, analysis or reactions, commentary on mass media trends, events and issues, articles, news, politics, sociology, law, business, economics, public health, medicine or any specialised field of study;
- 'public order' means anything relating to public peace, public safety and the functioning of a place in the Republic conducive for living and for the enjoyment of rights under the Constitution;
- 'public rights' means rights or claims under the Act that benefit the common interest of the public even if they personally affect the aggrieved person or issuing party, and which are also determined by the type of sanction or remedy that is sought;
- 'public safety' means anything relating to the protection of people in the Republic from events that cause violence, threat of harm, harm or injury or damage to property;
- 'public service entity' means an entity funded by tax revenue of the Republic which provides public services;
- 'public services' means a community-based service that is typically provided by the Government but which may be provided by private persons and includes services such as education, medical, healthcare, public health, sanitation, research, public safety, transportation, social services, housing and urban development, utilities and environmental protection;
- 'public trust' means confidence that the people in the Republic have in a person to act honestly, fairly and transparently;
- 'public welfare' means the general well-being of the people in the Republic, including social, economic and psychological wellbeing.
- 'publication' means distributing a statement, material or content to the public;
- 'referendum' means referendum under the Constitution;
- 'Regulations' means legislative instrument in respect of the Act;
- 'relevant authorities' means the authorities in charge of regulating that industry or sector;
- 'remedy' means a decision that is intended to cure, correct or prevent unlawful conduct;
- 'Report' means an informal allegation of wrongdoing under the Act intended to draw the Division's attention to the act or omission;
- 'Republic' means the sovereign State of Ghana including its territories;
- 'republish' means to publish again, reprint, reissue, repost, co-publish or repeat and for the avoidance of doubt includes 'retweeting' on X;
'resident' means a person issued a resident permit by the Ghana Immigration Service, or a person who:
- (a) has been present in this country for an aggregate period of not less than 183 days in any 12-month period, regardless of temporary absences; and
- (b) has adopted living in the country for settled purposes as part of regular activities;
- 'respondent' means an offending party, or a person representing the offending party, who responds to a Complaint;
- 'sanction' means a decision that is intended to discourage unlawful behaviour and includes civil and criminal penalties;
- 'SMS' means a system that enables the transmission, through a mobile network, of text messages;
- 'social media networking service' means a service related to social media;
- 'social media' means communication platforms through the internet that allow people to create and share information through text, video, photos and other content and includes dating sites and platforms such as Facebook, X, WhatsApp, Snapchat, Tiktok, Instagram and other similar platforms;
- 'state-owned enterprise' means an entity whether incorporated or not under the Companies Act, 2019 (Act 992) whose shares are wholly or substantially held or controlled by the Government;
'State' means the Republic;
- 'State Party' means any country that has ratified, accepted or acceded to a treaty;
- 'statement' means any word (including abbreviation and initial), number, image (moving or otherwise), sound, symbol or other representation, Artificial Intelligence generated information, or a combination of any of these;
'statutory board' means a Board established under statute;
- 'statutory law' means an Act of Parliament or any other subsidiary legislation;
- 'Superior Court of Judicature' means the High Court of the Republic, the Court of Appeal of the Republic and the Supreme Court of the Republic;
- 'territory' means area, including land, air space or water under the control or jurisdiction of a State;
- 'threat of harm' means a statement of an intention to inflict pain, injury, damage or other hostile action or to cause fear of harm;
- 'Unit' means a sub-division of the Division for administrative and enforcement purposes;
'Vice-President' means Vice-President of the Republic;
- 'violence' means an intentional or unintentional use of physical force or power, threatened or actual, against another person that either results in, or has a real likelihood of resulting in, injury, death or psychological harm; and
- 'worker' means a person who is engaged as an independent contractor or provides services to a person who is not his or her employer and includes a freelancer.
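The 183-day limb of the 'resident' definition above is a threshold over aggregate presence, with temporary absences disregarded. A minimal sketch, assuming presence is recorded as inclusive date intervals (the data shape and function name are our assumptions), is:

from datetime import date

def days_present(stays, window_start, window_end):
    # Aggregate days of presence falling inside a 12-month window,
    # counting each stay's overlap with the window (dates inclusive).
    # Gaps between stays (temporary absences) contribute nothing.
    total = 0
    for arrival, departure in stays:
        start, end = max(arrival, window_start), min(departure, window_end)
        if start <= end:
            total += (end - start).days + 1
    return total

stays = [(date(2025, 1, 10), date(2025, 4, 30)),
         (date(2025, 6, 1), date(2025, 9, 15))]
total = days_present(stays, date(2025, 1, 1), date(2025, 12, 31))
print(total, total >= 183)  # 218 days present -> meets the 183-day threshold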
Take Action
Your Voice Matters
Public submissions are being accepted until 14 November 2025
Download the draft bill and the Public Comment Declaration Form from the Ministry's website and submit completed forms via the provided email addresses. The bill is also available via Google Docs for commenting.