What are recommendations on Facebook?
We make personalised recommendations to the people who use our services to help them discover new communities and content. Both Facebook and Instagram may recommend content, accounts and entities (such as Pages, groups or events) that people do not already follow. Some examples of our recommendation experiences include Pages you may like, "Suggested for you" posts in Feed, People you may know or Groups you should join. Some entities might have limited or no access to features that encourage engagement, and might not be as widely recommended on Facebook as other entities.
Our goal is to make recommendations that are relevant and valuable to each person who sees them. We work towards our goal by personalising recommendations, which means making unique recommendations for each person. For example, if you and another person have Facebook friends in common, we may suggest that person as a potential new friend for you.
What baseline standards does Facebook maintain for its recommendations?
At Facebook, we have guidelines about what content we will recommend to people. Those guidelines fit into a strategy we have used to manage problematic content on Facebook since 2016, called "remove, reduce and inform". This strategy involves removing content that violates our Community Standards, reducing the spread of problematic content that does not violate our standards and informing people with additional information so that they can choose what to click, read or share. Discussion of our "reduce" work on Facebook has often centred on Feed and how we rank posts within it. However, our Recommendations Guidelines are another important tool that we use to reduce the spread of problematic content on our platform.
Through our Recommendations Guidelines, we work to avoid making recommendations that could be low quality, objectionable or particularly sensitive, and we also avoid making recommendations that may be inappropriate for younger viewers. Our Recommendations Guidelines are designed to maintain a higher standard than our Community Standards, because recommended content and connections are from accounts or entities that you haven't chosen to follow. Therefore, not all content allowed on our platform will be eligible for recommendation.
In developing these guidelines, we sought input from 50 leading experts specialising in recommender systems, expression, safety and digital rights. Those consultations are part of our ongoing efforts to improve these guidelines and provide people with a safe and positive experience when they receive recommendations on our platform.
We want to provide you with more information about the types of content, accounts and entities that we try to avoid recommending, both to keep our community more informed and to provide guidance for content creators about recommendations.
There are five categories of content that are allowed on our platforms, but that may not be eligible for recommendations. These categories are listed below, as are some illustrative examples of content within each category.
Content that impedes our ability to foster a safe community, such as:
- Content that discusses self-harm, suicide or eating disorders, as well as content that depicts or trivialises themes around death or depression. (We remove content that encourages suicide or self-injury, including graphic imagery of it.)
- Content that may depict violence, such as people fighting. (We remove graphically violent content.)
- Content that may be sexually explicit or suggestive, such as pictures of people in see-through clothing. (We remove content that contains adult nudity or sexual activity.)
- Content that promotes the use of certain regulated products, such as tobacco or vaping products, adult products and services, or pharmaceutical drugs. (We remove content that attempts to sell or trade most regulated goods.)
- Content shared by any unrecommendable account or entity (e.g. groups or Pages, as outlined below).
Sensitive or low-quality content about health or finance, such as:
- Content that promotes or depicts cosmetic procedures.
- Content containing exaggerated health claims, such as "miracle cures".
- Content attempting to sell products or services based on health-related claims, such as promoting a supplement to help a person lose weight.
- Content that promotes misleading or deceptive business models, such as payday loans or "risk-free" investments.
Content that users broadly tell us they dislike, such as:
- Content that includes clickbait.
- Content that includes engagement bait.
- Content that promotes a contest or giveaway.
- Content that includes links to low-quality or deceptive landing pages or domains, such as landing pages filled with click-through or malicious ads.
Content that is associated with low-quality publishing, such as:
- Unoriginal content that is largely repurposed from another source without adding material value.
- Content from websites that get a disproportionate number of clicks from Facebook versus other places on the web.
- News content that does not include transparent information about authorship or the publisher's editorial staff.
False or misleading content, such as:
- Content including claims that have been found false by independent fact-checkers. (We remove misinformation that could cause physical harm or suppress voting.)
- Vaccine-related misinformation that has been widely debunked by leading global health organisations.
- Content that promotes the use of fraudulent documents, such as someone sharing a post about using a fake ID. (We remove content attempting to sell fraudulent documents, such as medical prescriptions.)
As noted above, we take additional steps to avoid recommending certain types of sensitive content to minors on Facebook. For example, we strive to build our systems so as not to recommend content that promotes the sale or use of alcohol to users who are minors.
Account and entity recommendations
We also try not to recommend accounts (including profiles and Page admins) or entities (such as Pages, groups or events) that:
- Recently violated Facebook's Community Standards. (This does not apply to accounts or entities that we remove from our platforms entirely for violating Facebook's Community Standards.)
- Repeatedly and/or recently shared content (including the names or cover photos associated with groups or Pages) we try not to recommend across the categories described in the Content recommendations section above.
- Repeatedly posted vaccine-related misinformation that has been widely debunked by leading global health organisations.
- Repeatedly engaged in misleading practices to build followings, such as purchasing "likes".
- Have been banned from running ads on our platforms.
- Recently and repeatedly posted false information as determined by independent third-party fact-checkers or certain expert organisations.
- Are associated with offline movements or organisations that are tied to violence.
We may let people know when they're about to engage with an entity that meets any of the above criteria to help them make informed decisions.
A similar set of these guidelines applies to recommendations on Instagram. Those guidelines can be found in the Instagram Help Centre.