Coded Predator Phrases Like 'MAP' That Every Parent Should Recognise

Adults frequently encounter these concerning terms only after something alarming has already occurred, while children are exposed to them much earlier through their digital interactions. Sharlette A. Kellum from The Conversation highlights this critical issue in a piece originally published on Monday 6th April 2026.

Fact-checking organisations such as Snopes have repeatedly addressed the term MAP due to its frequent appearance without proper explanation across various online platforms.

A Parent's Disturbing Discovery

When checking her ten-year-old daughter's TikTok messages in early February 2026, a researcher expected to find typical content like dance challenges, school jokes, and anime clips. Instead, she discovered a stranger asking, "Do you like children?" Her daughter responded, "I'm not a MAP."


Upon inquiring what MAP meant, the child explained it stands for "minor-attracted person." This moment revealed something unsettling yet crucial: children are encountering coded language online long before many parents even know such terminology exists.

Why This Research Matters

In broader research examining online harms to children and teenagers, experts investigate how website and app design influences real-world safety outcomes. Forthcoming studies explore how social media platforms, messaging applications, and gaming communities succeed and fail at protecting young people from grooming attempts, unwanted contact, and other forms of online exploitation.

That's why the daughter's response was particularly chilling. Despite months of research into how major digital platforms like TikTok, Instagram, and YouTube shape online safety, the researcher had never encountered the term MAP. However, after only two months of chatting on TikTok, her child had.

The Terms Parents Should Know

MAP appears in some academic literature related to child protection policy and sexual exploitation prevention, and in online spaces such as forums, Reddit communities, and niche social media groups. Yet it remains unfamiliar to many parents and caregivers.

MAP exists within a wider ecosystem of euphemisms and coded references. Recognising these terms early can help parents identify potentially dangerous interactions and understand when someone online may be attempting to mask harmful intent. Awareness of this language gives adults a clearer sense of when to step in and support their children's safety on social media.

Parents and their children may see or hear these terms on popular apps and sites like TikTok, YouTube, Instagram, Discord, and Reddit. These terms include:

  • NOMAP (non-offending MAP) and anti-contact MAP: Labels used by people who identify as minor-attracted and claim they do not act on their attraction to children but still seek legitimacy or community.
  • 764, or 7 6 4: A numerical code used in certain forums, including niche Reddit threads and specialised message boards, to signal attraction to minors without using explicit language.
  • Age of Attraction, or AOA: A term used by MAPs to convey their age preference – typically starting at 11 years old.
  • Adult-Minor Sexual Contact, or AMSC: A term used by people who believe children should have sexual autonomy and can decide whether they want to engage in sexual activity with an adult – a position widely rejected by child protection experts.
  • Adult Friend and Young Friend, or AF/YF: Terms identifying the people who are in MAP relationships.

Why Children Encounter This Language First

Children and teenagers spend substantial amounts of time online. A 2025 Pew Research Center survey found that roughly one in five U.S. teens say they are on platforms such as TikTok and YouTube almost constantly, with YouTube, TikTok, Instagram, and Snapchat among the most widely used platforms.

Young people are remarkably good at picking up meaning from context. They notice tone, repetition, and how others react. They may not fully understand where a term originated, but they understand how it functions socially, meaning what it signals, when it's a joke, and when it's a warning.


Journalists and linguists describe this phenomenon as algospeak: language shaped by algorithmic moderation rather than clarity or transparency. Adults, by contrast, often encounter these terms only after something alarming happens. By then, the language may already feel normalised to children.

How Harmful Interactions Slip Past Moderation

Most major social media platforms rely heavily on automated moderation systems. These systems are effective at catching explicit words or previously flagged phrases. Research and reporting show that when moderation falls behind evolving terminology, harmful interactions – especially those involving adults initiating contact with children or teens – often follow a predictable progression:

  1. Euphemistic Language: People use euphemisms instead of explicit terms. "MAP" is less likely to trigger moderation or be flagged for removal than the word "paedophile" it often replaces.
  2. Numerical and Symbolic Codes: People often use numbers or emojis to communicate their meaning indirectly. Codes like "764" or certain emoji combinations can signal meaning without using recognisable words.
  3. Embedded Terminology: Some people embed terms in memes, jokes, or ironic commentary. This makes harmful language appear harmless or funny.
  4. Aesthetic Camouflage: Other people use anime avatars, pastel colour schemes, or cute usernames to appear harmless or youth-friendly.
  5. Private Message Shifts: Adults may move conversations to private messages. Initial contact often happens in public comments, but the real conversation shifts to private direct messages, or DMs.
  6. Backup Accounts: Finally, people may create backup accounts. When one account is flagged, another appears quickly.

Proactive Parental Education Strategies

Most online safety advice is reactive: adults are encouraged to respond after a term appears or after a child feels uncomfortable. Research increasingly shows that effective protection often begins earlier, with parents helping children understand how digital environments work. Studies on youth digital literacy suggest that children benefit from understanding that algorithms reward attention, repetition, and engagement rather than safety.

Knowing that an app interprets pausing on a video as a sign of interest helps young users see content as something pushed toward them, not something they sought out. Some families introduce general conversations about coded language early, during late elementary or early middle school. Discussing why people use euphemisms online prepares children to pause and ask questions when unfamiliar terms appear.

Research on parental mediation also finds that rehearsed responses help children disengage from uncomfortable interactions. Simple scripts such as "I don't want to talk about that," "I'm blocking you," or "I'm logging off now" can help reduce hesitation. Parents spending time with their kids as they interact with others on apps and websites – not to police them but to interpret what they are seeing – can also help children and teens learn how to analyse digital behaviour the same way they analyse peer pressure offline.

Studies also show that children and teenagers who understand they don't owe strangers politeness, personal details, or continued conversation are less vulnerable to manipulation. Awareness, not alarm, is a powerful tool for families navigating online spaces where harmful language and intent are often hidden in plain sight. When adults stay engaged and proactive, children are better equipped to recognise when something feels wrong and to talk about it with the people they trust.

Sharlette A. Kellum is an associate professor of criminal justice at Texas Southern University. This article was first published by The Conversation and is republished under a Creative Commons licence.