
Subliminal Messaging: Regulating Algorithmic Influences


Subliminal messaging refers to the practice of conveying messages below the threshold of conscious awareness. As a tool of advertising, it has long been recognised as a mechanism for influencing consumer behaviour. Scholars and developers alike have identified the use of subliminal messaging in the design of the algorithms of social media intermediaries such as Google, Facebook, Instagram, and YouTube, and have documented its impacts, often adverse, on the behaviour of users. It is not only the content circulated on these platforms but the platforms' algorithms themselves that influence behaviours and beliefs, by directing specific content at users based on their state of mind as inferred from the data collected about them. It is therefore imperative that such patterns and practices among online platforms are regulated.


The latest advisory of the Department of Consumer Affairs (DoCA), dated 7th June 2025, directs online platforms in India to conduct internal audits aimed at eliminating dark patterns within three months. Union Minister Prahlad Joshi has also expressed the intention of forming a joint working group with stakeholders to identify such "dark patterns". While Mr. Joshi cited the tendency of platforms to make consumers feel trapped as one example of a dark pattern, there is much left in this regard to uncover and address.


While India has been addressing the use of dark patterns in advertising, social media intermediaries like Meta, YouTube, and Instagram continue to be regulated under the Information Technology Act, 2000. Given the nature and scale of the impact these intermediaries have on society, and especially on impressionable youth, it is pertinent that these platforms be held to the highest standards of operation.


Policy steps for regulating such advertisements and content include awareness drives like the Jagriti Dashboard and Jago Grahak Jago, alongside efforts to curate and implement several pertinent legislations: the Guidelines for Prevention and Regulation of Dark Patterns, 2023 ("Guidelines of 2023") notified by the CCPA, the Digital Personal Data Protection Act, 2023 ("DPDPA"), the Guidelines for Prevention of Misleading Advertisements and Endorsements for Misleading Advertisements, 2022 ("Guidelines of 2022"), and the Consumer Protection (E-Commerce) Rules, 2020. While these legislations and guidelines address advertisements on social media, they were not drafted with the intention of regulating the algorithms of social media intermediaries, which, as stated, continue to be governed by the IT Act.


The Supreme Court's directions in Indian Medical Association vs. Union of India (a writ petition instituted in 2022 concerning misleading advertisements) are a crucial step in regulating advertisements on social media platforms. The Court issued directives to hold influencers, including celebrities, liable for their advertisements, particularly regarding their expertise in the products they endorse and the credibility of their claims. The directives also required a self-declaration along the lines envisioned under Rule 7 of the Cable Television Networks Rules, 1994, and further directed that for advertisements in print/press and internet media, such self-declarations be uploaded on the Ministry's portal.


Prior to the implementation of the Guidelines of 2022, the Advertising Standards Council of India (ASCI) had introduced guidelines to safeguard the interests of consumers and prevent unethical advertisements. However, as the code of a self-regulatory body, these lacked statutory authority, which is why the Guidelines of 2022 were subsequently implemented.


Subsequently, in 2023, the Central Consumer Protection Authority ("CCPA") notified the Guidelines of 2023, which list and define thirteen specified dark patterns: false urgency, basket sneaking, confirm shaming, forced action, subscription trap, interface interference, bait and switch, drip pricing, disguised advertisement, nagging, trick question, SaaS billing, and rogue malwares.


Additionally, while the Guidelines of 2022 and the Guidelines of 2023 cover most online platforms, their applicability remains unclear for platforms like Google, Meta, YouTube, and Snapchat, which do not directly offer products or services but which advertise the products and services of others and utilise collected user data for targeted advertisements. Persuasive advertising using subliminal messaging is among the popular forms of advertising employed by the social media intermediaries mentioned above. Notably, the language of the Guidelines of 2023 listing and defining 'dark patterns' falls short of addressing persuasive patterns of advertising, such as subliminal messaging, which are, admittedly, the business model of most, if not all, social media intermediaries.


For instance, Meta has allegedly been taking content posted on its platforms with a particular political leaning and directing it towards users with matching inclinations, based on the data collected from those users. Until the real political and societal implications of such targeted content are considered, these advertising practices may seem harmless. Allegations of social media intermediaries' influence on societies range from propagating violent protests in some countries to manipulating elections in others: examples include Hong Kong and Myanmar, where platforms allegedly propagated violent movements, and the Philippines, where Facebook (now Meta) was widely blamed for shaping the outcome of the 2016 presidential elections.


The evidence on election influence is, however, mixed. A study of the effect of Instagram and Facebook on the 2020 US presidential elections was undertaken by researchers at Facebook in collaboration with academics, using Facebook's own data and algorithms. The study "precisely estimated" the effects of deactivating Facebook and Instagram on affective and issue polarisation, perceived legitimacy of the election, candidate favourability, and voter turnout to be "close to zero". Studies undertaken by other scholars, as well as by Facebook itself, similarly dispute the claim that Facebook manipulated elections. Yet the infamous Cambridge Analytica scandal is alleged to have contributed significantly, through legal and illegal means, to influencing elections across the world, and studies suggest that about 20% of younger people say social media influences their opinions on political issues. Therefore, even the mere possibility that social media platforms could be used to influence and manipulate the beliefs of people across the world is an urgent cause for concern.


On a similar note, Meta is facing litigation in the United States claiming that its user interface is deliberately designed to make users addicted to scrolling. The issue came to prominence in 2021, when a whistleblower shared documents of Facebook's internal research. According to the whistleblower's testimony, also noted in the suit, the data indicated that Instagram worsens suicidal thoughts and eating disorders among teenage girls. In a similar case, Google paid USD 170 million to settle federal and state claims over YouTube's illegal collection of data from users under 13 years of age.


Thus, there is an urgent need to regulate all intermediaries with access to large amounts of their users' personal data and, with it, the means to influence and manipulate large sections of society.


Shoshana Zuboff, in her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, describes the new economic models in advertising. These models are based on scale, scope, and action-driven demands; they not only ensure revenue from such marketing patterns but also guarantee profits and novelty in business. Zuboff argues that such a guarantee is only possible by not merely sharing 'behavioural surplus' with advertisers but by actively shaping behavioural patterns. In such circumstances, where social media intermediaries not only influence behaviours and beliefs but actively form them, it is important to define the boundaries of the patterns and models in platform algorithms that intentionally and actively shape such beliefs and behaviours. The urgency of this necessity is fuelled by the consequences these influences carry over into civil society.


The Guidelines of 2023 apply to 'all platforms, systematically offering goods or services in India', as well as to advertisers and sellers. Furthermore, the Guidelines of 2022 define an 'advertiser' as a person who 'designs, produces and publishes advertisements either by his own effort or by entrusting it to others in order to promote the sale of his goods, products or services and includes a manufacturer and service provider of such goods, products and services'.


'Advertisement' is defined in the Consumer Protection Act, 2019 as "audio or visual publicity, representation, endorsement or pronouncement made by means of light, sound, smoke, gas, print, electronic media, internet or website and includes any notice, circular, label, wrapper, invoice or such other documents".


Given such definitions, all content on social media, regardless of its object and intent, may be considered advertisement, and the platforms themselves may be considered advertisers. By that logic, content on social media would be regulated as advertisement. As stated before, the Indian framework for regulating advertisements is rapidly evolving.


There is thus a demonstrable need for uniformity in regulating all platforms, including social media intermediaries, in their use of data collected from users. It is equally pertinent to ensure the effective execution of such regulations, with effective monitoring and identification of practices that may be veering into harmful subliminal messaging and persuasive technologies.


While the existing framework provides mechanisms for the regulation of platforms, there is little to no clarity regarding the implementation of the laws and regulations imposed on such platforms. Furthermore, the standard of evidence for proving compliance remains undefined and formally undocumented, leaving platforms wide room for interpretation and for skirting implementation frameworks.


The European Union ("EU") has regulated the use of subliminal messaging in commercial advertisements that causes economic and financial harm to consumers. In the European Union Artificial Intelligence Act, 2024 ("EU AI Act"), the EU has recognised and categorised subliminal techniques, except in medically necessary circumstances, as prohibited practices. The UK likewise recognised and banned subliminal messaging in broadcasting in 1991. However, there are no such regulations in India on the use of subliminal messaging in advertisements.


The guidelines issued by the International Chamber of Commerce ("ICC") have been the standard for laws and policies regulating advertisements. The 2018 ICC Advertising and Marketing Communications Code ("ICC Code") provides for the processing of personal data and the regulation of commercial sharing of personal data, with specific safeguards for vulnerable populations such as children. The Code requires that data collected be used only for the purposes contracted for and those ancillary to them, that it not be 'excessive' in relation to those purposes, and that it be accurate and preserved for no longer than required. The suggested measures require 'at least' one level of security for data shared with third parties.


The ICC Code provides a blueprint for global advertising standards and thus consolidates cross-border standards to that extent, which is valuable given the global nature of online platforms and, with it, modern advertising patterns and problems. The Code recognises and defines 'interest-based advertising' ("IBA") and, in recognising such practices, provides for the regulation of tracking and data security, especially for children, whose consent must be monitored through parental controls.


While the Code recognises practices of persuasive advertising, the understanding of such practices has advanced considerably since the Code was drafted; it therefore does not address the issue with the nuance that now requires renewed attention. And while the Code at least allows room for interpretation in this respect, corresponding legislation in India remains entirely absent.


There is thus a demonstrable and pressing need to balance the needs of human users against the promotion of technology, especially technology with the ability to influence human behaviour. The vast universe of content available on the internet must be regulated to allow, at a basic level, for conscious engagement by its human users.


While the available regulatory frameworks allow for an end-goal approach, i.e., to allow, disallow, or selectively allow the use of subliminal messaging, it is important for laws and regulators to also assess, understand, and identify patterns in platform algorithms that resort to subconscious targeting of content, which may or may not be in the nature of commercial advertisement. Only by recognising such technical patterns in algorithms can lawmakers begin to establish frameworks for regulating content on such platforms in a fair, equitable, and effective manner.

 

Scholars and policymakers have recognised the need for structured, documented 'algorithm audits' to secure greater accountability and to ensure that information is collected, processed, and disseminated in an ethical, safe, and legal manner, and they are accordingly working on frameworks for conducting such audits. A simplified illustration of what one such audit check might look like follows below.
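To make the idea of an algorithm audit concrete, the following is a minimal, hypothetical sketch in Python of one check an auditor might run: measuring whether a recommender system over-amplifies a category of content relative to a neutral baseline feed. Every name, the toy data, and the 1.5x tolerance threshold are illustrative assumptions only, not requirements drawn from any statute, guideline, or framework discussed above.

```python
# Hypothetical sketch of one check in an "algorithm audit": comparing how often
# a recommender exposes users to a content category versus a neutral baseline.
# All names, data, and the 1.5x threshold are illustrative assumptions.

from collections import Counter

def exposure_share(feed: list[str], category: str) -> float:
    """Fraction of items in a feed that belong to the given content category."""
    if not feed:
        return 0.0
    return Counter(feed)[category] / len(feed)

def audit_amplification(recommended: list[str], baseline: list[str],
                        category: str, max_ratio: float = 1.5) -> dict:
    """Flag the recommender if it over-amplifies a category relative to baseline."""
    rec_share = exposure_share(recommended, category)
    base_share = exposure_share(baseline, category)
    ratio = rec_share / base_share if base_share > 0 else float("inf")
    return {
        "category": category,
        "recommended_share": round(rec_share, 3),
        "baseline_share": round(base_share, 3),
        "amplification_ratio": round(ratio, 3),
        "flagged": ratio > max_ratio,  # exceeds the audit's tolerance threshold
    }

if __name__ == "__main__":
    # Toy feeds: each label stands in for the category of one recommended item.
    baseline = ["politics"] * 2 + ["sports"] * 4 + ["music"] * 4
    recommended = ["politics"] * 6 + ["sports"] * 2 + ["music"] * 2
    print(audit_amplification(recommended, baseline, "politics"))
    # Flags a 3x amplification of "politics" relative to the baseline feed.
```

In a real audit, the baseline might come from a chronological or randomised feed, the categories from documented content classifiers, and the thresholds from the regulatory framework itself; the point of the sketch is that such checks become straightforward to specify once regulators have defined what must be measured.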


While such auditing requirements have been adopted in limited respects in New York City's local law in the USA and under the EU Digital Services Act ("EU DSA") in the European Union, there is scope for wider and more meticulous adoption of algorithmic audits to ensure a safe digital space for everyone. Algorithm audits would promote a better understanding of algorithms and their patterns and tendencies among policymakers, who could then harness the information so acquired to form better, more efficient policies.


While algorithm audits have been used in some jurisdictions and scholars have attempted to document frameworks for them, there is little to no codified law or regulation prescribing how such audits are to be undertaken. Given this lack of documented procedure, no certain statutory credibility can be assigned to the audits, and, for the same reason, no credible accountability exists to ensure that developers comply with the law as intended.


It is therefore pertinent that frameworks and procedures are urgently documented for regulating content on the internet, and that platforms are held to the highest standards of accountability through tools like algorithm audits, so that their engagement with their human users can be understood, checked, and thus effectively regulated.



Authored by Shruti Singhi, a Lawyer, Author and Founder at Society for Impact and Policy Research







 
 
 
