Molly Russell

PFD Report Ref: 2022-0315
Date of Report: 13 October 2022
Coroner: Andrew Walker
Coroner Area: North London
Response Deadline (56 days): 8 December 2022
Responses: 5 of 5 received
About PFD responses

Organisations named in PFD reports must respond within 56 days explaining what actions they are taking.

Source: Courts and Tribunals Judiciary

Coroner’s Concerns
The following matters were raised during the Inquest:
1. There was no separation between adult and child parts of the platforms or separate platforms for children and adults.
2. There was no age verification when signing up to the on-line platform.
3. That the content was not controlled so as to be age specific.
4. That algorithms were used to provide content together with adverts.
5. That the parent, guardian or carer did not have access to the material being viewed, or any control over that material.
6. That the child's account was not capable of being separately linked to the parent, guardian or carer's account for monitoring.

I recommend that consideration is given by the Government to reviewing the provision of internet platforms to children, with reference to: harmful on-line content; separate platforms for adults and children; verification of age before joining the platform; provision of age-specific content; the use of algorithms to provide content; the use of advertising; parental, guardian or carer control, including access to material viewed by a child; and retention of material viewed by a child.

I recommend that consideration is given to the setting up of an independent regulatory body to monitor on-line platform content with particular regard to the above.

I recommend that consideration is given to enacting such legislation as may be necessary to ensure the protection of children from the effects of harmful on-line content and the effective regulation of harmful on-line content.

Although regulation would be a matter for Government, I can see no reason why the platforms themselves would not wish to give consideration to self-regulation taking into account the matters raised above.
Responses
Twitter International Unlimited Company
13 Oct 2022
Response received
Dear Senior Coroner,
1. Thank you for your Regulation 28 report to Prevent Future Deaths (the Report) dated 13 October 2022, in which you asked a number of parties, including Twitter International Unlimited Company (formerly Twitter International Company) ('Twitter') to respond to concerns arising following the inquest into the death of Molly Russell. We are grateful to you for affording us an extension of time to provide you with our response.
2. We would like to begin by extending our deepest sympathies to Molly's family and friends for the loss they have suffered.
3. The purpose of this letter is to set out the steps taken, or intended to be taken, by Twitter in respect of the six matters of concern detailed in your Report. As you will be aware, Twitter was not given Interested Person status at the inquest and so, in preparing this response, we have not been able to consider the evidence made available to the inquest. Notwithstanding this, we have carefully considered the recommendations set out in your Report in line with our ongoing commitment to ensuring our platform is a safe space for all users. We have also noted that many of your concerns are currently subject to Parliamentary debate in relation to the draft Online Safety Bill. Twitter welcomes the enactment of the Bill and is hopeful that it will create an appropriate framework for balancing the complex challenge of content regulation with the benefits of social media, weighing the respective freedoms and rights of users fairly.
4. Concern 1: separate platforms for adults and children; and Concern 3: controls to ensure content is age specific
4.1. In accordance with regulatory requirements in the US, UK and Europe, Twitter requires its users to be aged 13 or over. It does not currently have a separate platform for users aged between 13 and 16, or for those under 18 more generally. Instead, the platform is designed to offer a different experience for younger users, while all users are provided with tools to tailor the types of content they are presented with to suit their circumstances. It is worth noting that the average age of a Twitter user is higher than on other social media platforms: research carried out by Comscore reported that, as of December 2022, 98% of Twitter users are over the age of 18. Notwithstanding this, Twitter is designed to be age-appropriate for teenagers aged 13 and up.
4.2. There are a number of challenges to any social media platform in creating a separate platform for teenage users, while the benefits of segregated platforms are not clear. A proportion of teenagers will always discuss their emotions and mental health challenges on social media. Sharing a platform with adults provides an opportunity for supervision and support to be provided to teenagers, in circumstances where teenagers segregated on a platform may not be as well equipped to respond appropriately to such content.
4.3. Rather than segregating platforms, Twitter has designed its platform to provide a different experience for younger users as well as deploying a number of safety features in order to keep all users safe. By way of example:
4.3.1. Age restricted content – Twitter automatically restricts users who are under 18, or who do not include a birth date on their profile, from viewing sensitive media content (as set out in our sensitive media policy)1. In addition, a different approach to advertising is taken for users who are either under 18 or who do not include a birth date on their profile. Twitter prohibits marketing or advertising of a number of products and services to minors, including alcohol, weapons, weight loss products, health supplements, gambling products, sexual products and services, permanent cosmetics and other forms of body branding2. These age restrictions are in addition to complete bans on advertising certain products on Twitter, including any advertising of controlled substances, tobacco and projectiles.

1 https://help.twitter.com/en/rules-and-policies/notices-on-twitter
2 https://business.twitter.com/en/help/ads-policies/ads-content-policies/prohibited-content-for-minors.html
4.3.2. Safe Search – users of the Twitter platform have control over what they can see in search results by selecting Safe Search mode. Safe Search is automatically enabled for anyone whose birth date indicates they are under 18. When enabled, it is designed to exclude from search results any potentially sensitive content (such as content which is excessively gory, violent, or of a graphic sexual nature)3, along with accounts a user has muted or blocked (for whatever reason).
4.3.3. Sensitive Tweet Warnings – Twitter’s sensitive media policy prohibits users from including graphic content or adult nudity and sexual behaviour within areas that are highly visible on Twitter, including in live video, profile, header, List banner images, or Community cover photos. If a user shares this content on Twitter, the policy requires the user to mark their entire account as sensitive or to add sensitive content warnings to individual photos or videos. Doing so places an interstitial warning message on images or videos they post which contain sensitive media. Twitter may also place an interstitial warning message on some forms of sensitive media. An interstitial warning alerts a user that a Tweet contains sensitive content such as nudity, violence or sexual content and means other users can only see the media if they actively click to "show" the Tweet; it cannot be viewed by accident.
4.3.4. Controlling replies – users can choose who will be able to reply to their Tweets when posted. The default position is that everyone can reply but options are available to turn off all replies or only allow the accounts mentioned in the Tweet to reply. A user can also change who can reply to their Tweets, or turn off replies, after the Tweet has been posted.
4.3.5. Protected accounts – when an adult user signs up for Twitter, they can choose to keep their Tweets public or to protect them so that only approved followers can see and interact with them4. By contrast, when a user signs up for Twitter with a date of birth indicating they are under 18, the account is automatically defaulted to protected mode.
4.3.6. Account filters – users can filter the types of accounts they see in their notifications timeline. This feature allows users to mute notifications from certain categories of users, such as accounts that have not confirmed their phone number or email address, new accounts, accounts with a default profile photo, accounts that the user does not follow or accounts that do not follow the user5.

3 https://help.twitter.com/en/rules-and-policies/media-policy
4 https://help.twitter.com/en/safety-and-security/public-and-protected-tweets
5 https://twitter.com/settings/notifications/advanced_filters
4.3.7. Block and mute – users can block accounts instantly if they do not want that account to see their Tweets and/or do not want to see the account's Tweets. Users can also mute an account if they do not want to see its Tweets but do not want to unfollow it. Particular words, conversations, phrases, emojis and hashtags can also be muted to ensure those words or phrases do not appear on the user's timeline. (A simple sketch of how filters of this kind operate follows this list.)
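Taken together, the tools described in paragraphs 4.3.2 to 4.3.7 amount to a filtering pass applied before content reaches a user's timeline or search results. The following is a minimal illustrative sketch only, not Twitter's implementation; the type and function names (Tweet, UserSettings, visible_tweets) and the boolean "sensitive" flag are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Tweet:
    author_id: str
    text: str
    sensitive: bool = False  # flagged under a sensitive-media policy (assumed)

@dataclass
class UserSettings:
    age: int
    safe_search: bool = True                     # on by default for under-18s
    blocked: set = field(default_factory=set)
    muted_accounts: set = field(default_factory=set)
    muted_words: set = field(default_factory=set)

def visible_tweets(tweets, user):
    """Apply block, mute and Safe Search filters before display."""
    shown = []
    for t in tweets:
        if t.author_id in user.blocked or t.author_id in user.muted_accounts:
            continue  # blocked and muted accounts never appear
        if any(w.lower() in t.text.lower() for w in user.muted_words):
            continue  # muted words, phrases and hashtags are excluded
        if t.sensitive and (user.safe_search or user.age < 18):
            continue  # sensitive media is filtered for Safe Search and under-18s
        shown.append(t)
    return shown

teen = UserSettings(age=15)  # Safe Search applies regardless of the flag here
```

The point of the sketch is the ordering: account-level filters (block/mute) run unconditionally, while the sensitivity filter is conditioned on age and the Safe Search setting.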
5. Concern 2: age verification when signing up to the platform
5.1. Twitter is committed to protecting child safety online and has launched a range of age assurance measures to seek to ensure that only users aged 13 and over are permitted to access the Twitter platform.
5.2. As previously noted, Twitter requires its users to be at least 13 years old in order to create an account. Twitter approaches the challenge of age assurance by combining self-declaration (i.e. users providing their date of birth) with additional technical measures (as described in the ICO's Age-Appropriate Design Code6) which together aim to ensure that the account holder's self-declared age is genuine and that appropriate controls are in place to protect teenagers.
5.3. Twitter first collects the user’s age through the neutral presentation of a date of birth prompt. Once a date of birth is entered, Twitter then determines the user’s age. At this stage, new users are informed that Twitter uses their age to customise their experience, including advertising, and provides options as to the visibility of the user's date of birth to others.
5.4. Users who enter a date of birth that indicates they are under the age of 13 are not permitted to go any further in the account opening process. There is an account restoration appeals process for those who erroneously enter the wrong date of birth and are not permitted to proceed with account opening, or who have their account off-boarded as a result of an indication of being under 13. As part of the account restoration appeals process, the user is required to provide ID documentation proving that they are over the age of 13. These appeals are subject to human review. If Twitter cannot verify the user is over the age of 13, the account is not restored. (This appeals process is often used by business accounts which enter the date of incorporation, rather than by children attempting to gain access to Twitter. Where the account is registered to a legal person (i.e. a company), evidence would need to be provided to show that the account is being used for business purposes.)

6 See Chapter 3 of the AADC, 'How can we establish age with an appropriate level of certainty'.
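Paragraphs 5.3 and 5.4 describe, in effect, a gate on the self-declared date of birth. Below is a minimal sketch of that flow; the function names, return messages and teen defaults are assumptions for illustration, not Twitter's code:

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 13

def age_on(dob: date, today: date) -> int:
    # Whole years elapsed since the date of birth.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def handle_signup(dob: date, today: Optional[date] = None) -> str:
    """Gate account creation on the self-declared date of birth."""
    today = today or date.today()
    age = age_on(dob, today)
    if age < MINIMUM_AGE:
        # Signup is refused; an appeal with ID documentation is possible,
        # subject to human review (paragraph 5.4).
        return "rejected: under 13 (restoration appeal with ID available)"
    if age < 18:
        # Teen accounts receive protective defaults (protected mode,
        # Safe Search, restricted advertising), per paragraphs 4.3 and 5.5.
        return "created: teen account with protective defaults"
    return "created: standard account"
```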
5.5. Users who enter a date of birth that indicates they are over 13 but under 18 are prevented from seeing sensitive content, such as adult content, on any surface (e.g. their timeline or search results), in line with Twitter's sensitive media policy and the automatic application of 'Safe Search' for such users. Any sensitive content contained in the account holder's page will be obscured by a sensitivity screen, in line with the policies identified at paragraph 4.3.3 above.
5.6. Users are also able to report accounts which they believe are operated by someone who is underage and Twitter will take action if appropriate.
5.7. In respect of advertising, users who have not registered a date of birth on their profile (for example, because they opened their account before providing a date of birth was required) will be asked to enter their date of birth in order to follow the accounts of certain brands. Twitter prohibits marketing or advertising of a number of products and services to minors, such as alcohol. If the user is a minor, these types of ads will not be served to them, as explained in further detail in paragraphs 4.3.1 and 6.2.
5.8. In addition to the measures above, Twitter has been working with experts to research further age assurance measures that incorporate 'privacy by design' principles (required by the GDPR) and work in a global context. These measures also need to account for the importance of online anonymity for minorities and disadvantaged communities around the world and the use of Twitter as a platform for whistle-blowers and human rights advocates.
5.9. There are currently a range of projects which are being actively examined by Twitter with these considerations in mind, focused on the best interests of children.
6. Concern 4: algorithms used to provide content together with adverts
6.1. As you may be aware, the majority of online services use algorithms in some form to suggest relevant content to users, which helps improve the usability and accessibility of online services.
6.2. Twitter uses algorithms to help provide content to users. The main feed on Twitter is sub-divided between a 'Following' tab (which only shows Tweets posted or Re-Tweeted by accounts a user is following) and a 'For You' tab (which suggests more Tweets from accounts and topics a user follows as well as recommended Tweets). Users may also see content such as Promoted Tweets or Re-Tweets in their timeline7. Neither tab permits sensitive content or inappropriate advertising to be surfaced for users under the age of 18. Twitter's policies and enforcement measures seek to reduce the risk that illegal or potentially harmful content could be shown to users.
6.3. Twitter's Suicide and Self Harm policy prohibits users from promoting or encouraging suicide or self-harm8. If this policy is violated (e.g. the user shares content which intentionally encourages others to harm themselves, asks others to encourage the user to harm themselves or shares detailed information or instructions relating to self-harm or suicide), Twitter actions the content so it is no longer visible publicly and requires the user to remove the content. The user will be unable to Tweet again or interact in any way on the platform until they do so. If a user continues to violate Twitter's Suicide and Self Harm policy, or if an account appears dedicated to promoting or encouraging self-harm or suicide, the account will be permanently suspended. In addition to content removal, Twitter also marks hyperlinks as unsafe; for example, where a link may be seeking to spread instructional material9.
6.4. Twitter's Suicide and Self Harm policy was developed after consulting extensively with experts. The policy does not prevent people who have engaged in self-harm or experienced suicidal thoughts from sharing their personal experiences and using the platform for seeking support. Experts believe that removing posts of this nature risks not only stigmatising mental health challenges but also removes an opportunity for intervention by the friends and family of a user.
6.5. Twitter has also launched a new product called '#ThereIsHelp' in the UK10. This means a prompt with a link to the Samaritans charity will appear when a user searches for words related to suicide or self-harm. On the mobile app, the means by which the majority of users access Twitter, the prompt takes up almost half the screen.
6.6. During the last reporting period, there was a substantial increase in the volume of accounts suspended (an 18% increase) and content removed (a 23% increase) under Twitter's 'Promoting suicide or self-harm' policy. 408,143 accounts were actioned in total. We attribute this increase to our continued investment in identifying violative content at scale. As a business we are determined to continue improving in this area. To improve transparency, we regularly publish data on Twitter's enforcement of its policies11.

7 https://help.twitter.com/en/using-twitter/twitter-timeline
8 https://help.twitter.com/en/rules-and-policies/glorifying-self-harm
9 https://help.twitter.com/en/safety-and-security/phishing-spam-and-malware-links
10 https://blog.twitter.com/en_us/topics/company/2018/wspd2018
11 https://transparency.twitter.com/en/reports/rules-enforcement.html#2021-jul-dec

7. Concern 5: Parental access and control over the material being viewed and Concern 6: linking to parental accounts for monitoring
7.1. As previously stated, users under 18 make up a very small minority of all Twitter users in the UK. Notwithstanding this, our Trust and Safety Team is dedicated to advocating for the safety of its users and protecting their rights, and therefore engages with experts to ensure Twitter offers the most appropriate solutions to parents with children using Twitter. In collaboration with Internet Matters (an organisation launched with the specific intention of supporting parents and carers to navigate the digital landscape), Twitter has developed a parental controls guide, which provides step-by-step instructions for parents to manage their child's account12.
7.2. These instructions allow parents to protect their child's Tweets (as described at paragraph 4.3.5 above) and prevent children from receiving abusive or inappropriate content. The guide also gives the parent control over who can contact their child and what personal data is shared. The controls also allow parents to limit who can see their child's Tweets, who can contact them and who can tag them.
7.3. As explained above, users can curate the types of content they see to match their interests and hide Tweets that contain sensitive content. In addition, Twitter introduced 'Safety Mode' in September 202113, which allows users to temporarily block accounts for using potentially harmful language or sending repetitive and uninvited replies or mentions.
8. Concluding remarks
8.1. We hope that this response provides you with a helpful explanation of the steps Twitter has already taken in relation to your concerns. Twitter does not underestimate the challenge in this area. We are committed as an organisation to working with experts, regulators, government and others in the sector to ensure that online services are as safe as they can be for their users, and in particular for those under the age of 18.
Meta
6 Dec 2022
AI summary: Meta has recently introduced and expanded its Family Centre supervision tools, allowing parents to monitor a teenager's activity, set time limits, and see blocked users or privacy-setting changes. The organisation also continues to update its Instagram Parents’ Guide and provides educational resources.
Dear Coroner,

Inquest touching upon the death of Molly Russell: Response to Regulation 28 Report to Prevent Future Deaths
1. Meta Platforms Ireland Limited1 (“Meta”) writes in response to the Regulation 28 Report to Prevent Future Deaths (the “Regulation 28 Report”) dated 13 October 2022, made following the inquest into the death of Molly Russell (the “Inquest”). At the outset, we wish to again express our deepest sympathies to Molly Russell’s family and friends for their loss.
2. Meta has carefully considered the evidence given to the Inquest, particularly the evidence given by Mr Russell, and the concerns raised in the Regulation 28 Report. We are committed to providing a positive experience on Instagram, especially for teenagers, and to continually taking steps to develop our policies, tools and technology in consultation with experts. Meta has engaged in the development of the UK Online Safety Bill from the outset, and will continue to do so. We support the Government's focus on suicide and self-harm content within the Online Safety Bill, recognising how complex this issue is, and we welcome the Government’s guidance on how to strike the balance between allowing for mental health dialogue and preventing people from seeing content on our platforms which may be sensitive. There is always more to be done in this space, and we will continue to carefully reflect on the views of the Coroner and the Russell family on these difficult issues.

1 Although the Regulation 28 Report was addressed to “Meta Platforms, 1 Hacker Way, Menlo Park, California, CA 94025”, Meta Platforms Ireland Limited is the relevant entity which operates and controls the Instagram service in the UK and which had Interested Person status in the Inquest. The Regulation 28 Report was therefore provided to, and this response is provided by, Meta Platforms Ireland Limited.

Matters of concern
3. We note that the Regulation 28 Report refers to six matters of concern in relation to “online sites” and “platforms”. We respond to each of the six matters raised with respect to the Instagram platform. Given the interlinked nature of certain matters, some are addressed jointly below.

Separate platforms for children and adults (Concern 1); controlling content so as to be age-specific (Concern 3):
4. Meta’s Terms of Use prohibit people under the age of 13 from using Instagram and our platforms are designed for use by people aged 13 and over. This is in line with legislation and guidance in the US, Europe, and the UK on privacy and data processing, including the UK General Data Protection Regulation.2
5. Providing a safe, positive and inclusive environment for all of the people who use our apps is of paramount importance. We design our policies and services, including our Community Standards and Community Guidelines (hereafter our “Content Policies”) which define what content is and is not permitted on our platforms, with our youngest users in mind. These policies seek to balance freedom of expression alongside other important values, such as safety, privacy and dignity. We work hard to enforce our Content Policies and use a combination of ever-advancing technology, user reports and human reviewers to detect and remove content that violates them. Meta has also implemented Recommendation Guidelines (discussed further below) in conjunction with leading experts, through which we work to avoid recommending content (for example, on the “Explore” surface) that could be sensitive or inappropriate for younger users.
6. While Meta does not currently provide separate platforms for adults and teenagers in the UK, Instagram provides a tailored experience for teenage account holders. As a result, a teenager’s experience on Instagram is different from that of an adult in a number of ways (in addition to the parental controls discussed further below). While we will continue to look for further opportunities to adapt our services to ensure teenage users have a positive and age-appropriate experience on Instagram, some of the most significant differences at present are:
a. Users in the UK and the EU who tell us they are under 18 years old are defaulted into a “private” account when signing up to Instagram. For teenagers already on Instagram, we prompt them to review and update their account privacy settings. Private accounts provide users with greater control over who sees or responds to their content (which can only be seen by users who they allow to follow them).

2 Article 8(1) of the UK GDPR.

b. We have introduced the “Sensitive Content Control”, which applies to all surfaces on Instagram where content or accounts are recommended.3 As set out in our Recommendation Guidelines, we work to avoid recommending certain types of content to people. As part of this, the Sensitive Content Control seeks to provide users with some degree of choice over how much non-violating (i.e. does not violate our Content Policies) but potentially sensitive content is displayed to them on these surfaces. The Sensitive Content Control has only two options for teenagers: “Standard” and “Less”. Whereas users aged 18 and over can select to see “More”, we do not allow teenagers to access the less restrictive sensitivity settings. Additionally, teenagers under the age of 16 are defaulted into the “Less” option when signing up to Instagram. For teenagers already on Instagram, we send a prompt encouraging them to select the “Less” experience. This feature seeks to make it even more difficult for young people to come across content which does not violate our Content Policies but which could be sensitive.
c. We already work to limit the ability for users under the age of 18 to view certain categories of content, for example diet products, alcohol, and tobacco (this is called “age-gating”), and we are currently looking at expanding the types of content that we are able to age-gate.
d. We collaborated with experts to develop the “Take a Break” feature to encourage people, particularly teenagers, to make informed decisions about how they are spending their time on Instagram. All Instagram users have the ability to set reminders to take more breaks from using Instagram. These reminders show expert-backed tips to help users to reflect and reset. To make sure that users under the age of 18 are aware of this feature, we show them notifications suggesting they turn these reminders on. This feature builds on our existing “Daily Limit” feature, which allows people to see how much time they are spending on Instagram and set limits for how long they want to spend on Instagram each day. We are currently testing new tools that help teenagers reduce distractions and give them more ways to take time away from Instagram, and we hope to launch these to our community in the UK soon.
e. We have introduced an alternate topic nudge feature for teenagers in a number of countries, including the UK. On Instagram, teenagers are now shown notifications that encourage them to switch to a different topic if they have been dwelling on the same type of content on Explore. We designed this feature based in part on research which suggested that nudges could be effective for helping people, especially teenagers, to be more mindful about how they use social media.
f. We have implemented technology which seeks to limit teenagers under the age of 18 from receiving unwanted contact from adults. The technology identifies adult Instagram accounts which have displayed potentially suspicious behaviour and limits these accounts from following or interacting with users under the age of 18. We also work to avoid recommending content posted by teenagers’ accounts to potentially suspicious accounts and prevent these accounts from being able to see comments from teenagers on other posts. Further, we do not allow potentially suspicious accounts which search for a specific username belonging to a teenager to then follow that teenager’s account.

3 Instagram has a number of recommendation surfaces including the “Explore” and “Reels” (short videos) tabs, where users may be shown content from accounts that they do not already follow. The purpose of recommending content is to enable those who use our services to discover new communities and content that they might be interested in.
g. We work to restrict direct messaging between teenagers and adults by limiting users we identify as adults from sending direct messages to people we have identified as under 18 years old, where the teenager is not already following the adult’s account. As an extra layer of protection, we are currently testing removing the “message” button on teenagers’ Instagram accounts when the accounts are viewed by suspicious adults. Additionally, we prompt teenagers to be more cautious about interactions in direct messages by providing safety notices to this effect.
h. We have developed a number of tools so that teenagers can let us know if something makes them feel uncomfortable while using our apps, and we have recently introduced new notifications that encourage them to use these tools. For example, after a teenager blocks an account, we prompt them to report the account to us.
7. Consistent with our continued efforts to provide age-appropriate services, we have developed the Best Interests of the Child Framework4 to be used during app and feature development. The framework helps us consider, and incorporate into the services we provide, guidance and principles from the Information Commissioner’s Office’s (the “ICO”) Age-Appropriate Design Code (“AADC”), the UN's Convention on the Rights of the Child, and other children’s rights groups. The framework includes six key considerations that our teams can consult to seek to ensure their work is rooted in global best practices and that our services support the well-being and rights of young people. We also recognise that to do this effectively, we must account for a range of different perspectives. We therefore incorporate a variety of views, including from teenagers and their parents and guardians, when designing our apps. An example of this process is the virtual co-design methodology employed in the development of Family Centre and Education Hub. Between December 2021 and October 2022, Meta and the Trust, Transparency and Control (“TTC”) Labs5 conducted co-design sessions with a diverse sample of teenagers and their parents/guardians, alongside consultations with external experts from government, nonprofit organisations and academics to help inform the development process. We will continue to evolve the guiding questions and resources in Meta’s Best Interests of the Child Framework as we learn more through expert consultation, user research and co-design.
8. More broadly, we continue our engagements with experts in this space and our work to implement new tools and features which are designed to help ensure people have a safe and positive experience on our platforms. A recent example is the safety tools we announced in October 2022, which include: (i) allowing an individual, when blocking another user, to select to block other accounts they may have created, making it more difficult for that user to interact with them on Instagram; (ii) “nudging” users by sending them notifications which encourage them to pause and consider their response before replying to a comment that our systems tell us might be sensitive; and (iii) sending users a reminder to be respectful when sending direct messages to people who use creator accounts.6

4 https://www.ttclabs.net/news/metas-best-interests-of-the-child-framework
5 TTC Labs is a cross-industry effort initiated and supported by Meta to create innovative design solutions to give people more control over their privacy.
6 Creator accounts are a type of professional (rather than personal) Instagram account.

Age verification when signing up to the online platform (Concern 2):
9. Understanding people's age online remains a complex, industry-wide challenge that requires thoughtful solutions to appropriately balance privacy, effectiveness, and fairness. Many people, particularly teenagers and people from underserved communities, do not have access to formal identification. As an industry, we have to explore novel and equitable ways to approach the dilemma of verifying age online that are not reliant on a form of identification. We have recently been testing new methods to verify age online and we are committed to continuing to work with governments, regulators, experts and others in our industry to develop clear and equitable solutions and guidance for age assurance online.
10. Meta recognises that there is no perfect solution to online age verification and we have therefore sought to develop a multi-layered approach to address this complex issue. Meta’s Terms of Use have always prohibited people under the age of 13 from using Instagram and we have developed a number of methods to help to prevent people under the age of 13 from misrepresenting their age to use our platforms and to ensure those who do meet our minimum age requirement receive the appropriate experience for their age (these methods are summarised below):
a. We require all users to enter their date of birth when they sign up to Instagram and have asked users who signed up prior to age being required in 2018 (for users in the UK and EU) to provide their age in order to continue using Instagram. We implement mechanisms in the user registration process to seek to prevent people under 13 from circumventing age restrictions. For example, if an individual tries to sign up using a date of birth which reflects that they are under 13, they receive a generic error message informing them that they cannot create an account. After two attempts at entering an underage date of birth, the individual is blocked from creating an account for a period of time. (A simple sketch of this attempt-limiting logic follows this list.)
b. As well as seeking to deter people under 13 from creating an account, we also continue to work to improve the mechanisms we have in place to detect and remove underage accounts. Anyone (not just individuals who themselves have an Instagram account) can report suspected underage accounts to Instagram. When we become aware that an account may belong to an individual under the age of 13, we prohibit the user from accessing their Instagram account until they are able to demonstrate that they meet our minimum age requirement; if a user cannot demonstrate they are 13 or older within 30 days, their account is permanently disabled and removed from the platform. In the last two quarters of 2021, Meta removed 1.7 million accounts on Instagram globally because the users were unable or unwilling to demonstrate that they meet our minimum age requirement.
c. We have invested heavily in artificial intelligence models to help us estimate age. We use this technology to help us identify whether someone is an adult or a teenager and work to tailor their experience accordingly, for example, by restricting teenagers' interaction with potentially suspicious adults (as explained above). We are working to improve the accuracy of this technology and to deploy it in additional use cases as part of our ongoing efforts to provide our users with an age appropriate experience.
d. Meta continues to work to develop accessible, privacy-protective and technology-driven age assurance solutions. This year, we began partnering with online age-verification specialist Yoti to bring new age verification tools to Instagram. Now, when someone attempts to edit their date of birth from under the age of 18 to 18 or over, we require them to verify their age by selecting either to: (i) provide a video “selfie”, with Yoti’s face-based age prediction technology then predicting their age; or (ii) upload their identification documents. We are continuing to explore expanding these tools to new use cases.
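The attempt-limiting behaviour described in item (a) above can be sketched as follows. The identifiers, the in-memory stores and the cooldown length are assumptions for illustration (the letter says only that the individual is blocked "for a period of time"); this is not Meta's code:

```python
import time

MAX_UNDERAGE_ATTEMPTS = 2
BLOCK_SECONDS = 24 * 60 * 60  # assumed cooldown; the actual duration is unstated

underage_attempts: dict = {}   # device/session identifier -> underage attempt count
blocked_until: dict = {}       # device/session identifier -> unblock timestamp

def try_register(device_id: str, declared_age: int) -> str:
    now = time.time()
    if blocked_until.get(device_id, 0.0) > now:
        return "error: account could not be created"  # same generic message
    if declared_age < 13:
        underage_attempts[device_id] = underage_attempts.get(device_id, 0) + 1
        if underage_attempts[device_id] >= MAX_UNDERAGE_ATTEMPTS:
            blocked_until[device_id] = now + BLOCK_SECONDS
        # Deliberately generic: the message does not reveal that the declared
        # age caused the failure, which would otherwise teach circumvention.
        return "error: account could not be created"
    return "ok: continue registration"
```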
Algorithms are used to provide content together with adverts (Concern 4):

11. Along with most search engines, news websites, online marketplaces and other websites, Meta uses technology, including algorithms,7 in a number of ways, including to help us to remove content that violates our Content Policies and to avoid recommending content that is contrary to our Recommendation Guidelines. Meta also uses content-ranking algorithms which aim to identify and show people content they are likely to find the most interesting, by ordering the content on a user’s Instagram feed and making recommendations of content and accounts.

7 An algorithm being a formula or set of steps for solving a problem, and a standard tool used in computer programming.
12. Content-ranking is almost ubiquitous on the modern internet, due to the sheer volume of content available online and the need for users to be able to sort through and identify the most relevant information. Instagram uses many pieces of information (known as “signals”) to rank content. Safety and security considerations are at the forefront of our decision-making processes at Meta, and we work to ensure we build safety and integrity measures into the algorithms we use.
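The "signals" in paragraph 12 can be pictured as weighted features combined into an ordering score. The sketch below is a generic illustration of signal-based ranking, not Meta's actual model; the signal names and weights are invented:

```python
# Hypothetical ranking signals, each normalised to the range 0..1.
EXAMPLE_WEIGHTS = {
    "recency": 0.4,             # newer posts score higher
    "affinity": 0.3,            # how often the viewer interacts with the author
    "predicted_interest": 0.3,  # modelled likelihood the viewer engages
}

def score(signals: dict, weights: dict = EXAMPLE_WEIGHTS) -> float:
    """Combine per-post signals into a single ordering score."""
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def rank(candidates: list) -> list:
    # Integrity measures (Content Policy enforcement, Recommendation
    # Guidelines) would remove ineligible posts before this ranking step.
    return sorted(candidates, key=score, reverse=True)

feed = rank([
    {"recency": 0.9, "affinity": 0.1, "predicted_interest": 0.3},
    {"recency": 0.2, "affinity": 0.8, "predicted_interest": 0.7},
])
```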
13. Content which violates our Content Policies is not permitted on Instagram; we work hard to enforce these policies to seek to ensure that this content is not available to be ranked or recommended. Separately, Meta has published its Recommendation Guidelines which express at a high level the types of content we work to avoid recommending. Our Recommendation Guidelines are designed to set a higher bar than our Content Policies, because recommended content comes from accounts that the user has not chosen to follow. Meta’s algorithms are designed to apply these Recommendation Guidelines such that we avoid making recommendations that may be potentially sensitive, whilst respecting the rights of other users to express themselves by not removing such content from the platform entirely. As explained above, we have recently introduced an alternate topic nudge feature for teenagers that prompts them to switch to a different topic if they have been dwelling on content on the same topic on Explore.
14. We provide a number of mechanisms which enable users to control the content they see on Instagram surfaces. For example, users are able to report or “hide” content from their Instagram, including by unfollowing or “muting”8 accounts. We also made changes to Instagram Feed to provide users with the choice to view a “Favourites” feed (which shows posts from accounts selected by the user as their “favourites”) or a “Following” feed (which shows recent posts from accounts that a user follows). Both options display posts in reverse chronological order (i.e. without content ranking by algorithms).

8 If a user selects to “mute” another user on Instagram, they will not see their posts or stories in their Feed or see incoming messages from the muted user.
15. With respect to advertisements, Meta takes extra precautions when providing advertisements to users under the age of 18 and has long restricted the type of advertisements that can be shown to teenagers on our platforms. For example, in the UK advertisers can only target advertisements to people under the age of 18 on the basis of age, gender and location. Moreover, we do not allow advertisements on certain topics such as alcohol, tobacco, weight loss or dating services to be shown to users under the age of 18 in the UK.

Parental access to and control over material viewed (Concern 5) and linking of and monitoring of accounts by parents (Concern 6):
16. Meta has wide-ranging parental supervision and support tools in place today, and is committed to continuing to work in consultation with parents, teenagers and experts to seek to provide additional parental oversight and support features over time, and to explore more ways to both foster communication between parents and their teenagers and to support teenagers in having age-appropriate experiences online.
17. We recognise and support the important role that parents and guardians have to play in helping their teenagers navigate social media. We use expert and regulatory guidance to assist us with assessing the appropriate degree of parental supervision of teenagers’ use of social media and how to balance privacy and parental oversight. For example, the ICO’s AADC, which applies to online services likely to be accessed by children, cautions that “children who are subject to persistent parental monitoring may have a diminished sense of their own private space which may affect the development of their sense of their own identity. This is particularly the case as the child matures and their expectation of privacy increases.” The AADC recommends that online services which provide parental controls should provide children up to 12 years old with materials which explain that their parent is being told “what they do online to help keep them safe”. For teenagers aged 13-15 (described in the AADC as “early teens”) the recommendation changes to suggest that materials be provided to “explain how your service works and the balance between parental and child privacy rights”. In light of this guidance, we consider that it is important for in-app parental supervision tools to reflect the evolving maturity of teenagers and their increasing expectations of privacy as they get older.
18. Meta has accordingly implemented wide-ranging parental tools and resources, including tools which allow parents and guardians to supervise their teenager’s use of Instagram in-app, in addition to monitoring in person or at a device level. In 2022, Meta launched the Family Centre, a centralised place where parents can access supervision tools and information resources from leading experts. Through the Family Centre, once both the parent and teenager have accepted the supervision tools, parents can view the accounts that their teenager follows and the accounts that follow their teenager on Instagram, see the amount of time that their teenager spends on Instagram, set daily time limits on their teenager’s Instagram use, and schedule breaks for specific times of day or night when they do not want their teenager to use Instagram. If a teenager reports another user, they can also share details of this with the supervising parental account. We have recently expanded these supervision tools; new features include the ability for parents to see who their teenager has blocked, if their teenager changes their default privacy settings, and if they have any new connections (i.e. if they have begun following or being followed by any new users).
19. In addition, experts have told us that it is important for parents to have conversations about internet use with their teenagers, and Meta has long endeavoured to provide helpful information and resources to assist those conversations, for example, through the Education Hub (accessible from the Family Centre). This includes, by way of example, the Instagram Parents’ Guide which has been published for several years and which we continue to update in line with current expert guidance, a guide to media literacy with ConnectSafely,9 and a resource for encouraging supportive conversations about mental health produced by the American Foundation for Suicide Prevention.

9 A nonprofit dedicated to educating users of connected technology about safety, privacy and security.

Conclusion
20. We hope that this response is helpful in explaining the work Meta is doing related to the concerns raised by the Coroner. This work is ongoing and we will continue to build on and constantly re-evaluate the approach we take. We look forward to continuing to work with experts, people impacted by these complicated issues, regulators and legislators, including as the Government and Ofcom take forward the Online Safety Bill, so that we can ensure that we best serve the people who use our services.

Meta Platforms Ireland Limited
Snap
7 Dec 2022
AI summary: Snap has updated its Community Guidelines with increased detail on self-harm and suicide content, launched the 'Here For You' in-app mental health resource, and refreshed its Global Safety Advisory Board. The platform also detailed existing protections for under-18s, such as requiring mutual friends for communication and private friend lists.
Dear HM Coroner Mr Andrew Walker,

Thank you for your initial request for information dated 13th October 2022. We want to first extend our deepest sympathies to Molly’s family for their tragic loss. We know this must continue to be an extremely difficult time for her family and friends. We recognise our responsibility to our community and users of social media more broadly - a responsibility that extends to the entire technology sector.

Before we answer the important questions you raised regarding the current safety protections in place on platforms including Snapchat, we wanted to first briefly set out how Snapchat operates, the overall approach we take to moderating suicidal and self-harm content in particular, and the resources we make available to help protect the mental health and well-being of Snapchat users.

About Snapchat

From the beginning, Snapchat was designed to be different from traditional social media, prioritising the safety, privacy and wellbeing of our community. Unlike other platforms, we don’t open to a feed of algorithmically amplified and unvetted content, which can push users into scrolling endless streams of recommended, unmoderated content. Instead of a feed of other people’s content, Snapchat opens directly to a camera, encouraging users to express themselves. At its heart, Snapchat is a visual messaging application designed to encourage users to interact (either 1:1 or in small groups) with their real friends, meaning people they know in real life.

In practice, this means that we do not offer an open news feed where unvetted publishers or individuals have the opportunity to broadcast illegal or harmful content to large groups. Our Discover section, which is the part of the app showing news and entertainment, features media publishers and individual creators. This content is not interspersed with posts from friends. Meanwhile, our Spotlight tab shows the most entertaining photos and videos from within the Snapchat community. Content on Discover and Spotlight is moderated prior to reaching a large audience.

With this approach, which has been in place since Snap’s inception, we are able to help stop illegal and harmful content and activity from being surfaced across the public parts of Snapchat.

Our Approach to Enforcing against Content Violations

We expressly prohibit accounts and content that promote or encourage self-harm or suicide, alongside prohibiting other illegal and dangerous material. This is stated clearly in our Community Guidelines1 and accompanying Terms of Service2. If content of this nature is reported, human moderators review the report and the content is promptly removed. We make it easy and accessible for users to confidentially report violating content, activity or concerns to us directly in the app. Reports are swiftly investigated by our dedicated global content moderation team, which operates around the clock. While Snaps may delete by default or after 24 hours, we can preserve content when it is reported to us, so that we can properly investigate and enforce against violations of our Community Guidelines.

Whilst we have always prohibited the promotion, glorification and encouragement of self-harm and suicidal content, to provide additional insight and transparency into our moderation efforts, earlier this year we added a dedicated content category for suicide and self-harm to our bi-annual Transparency Report3. This public report summarises, at both global and country-specific levels, the content and accounts Snap Inc. enforced against on Snapchat across a range of categories including harassment and bullying, hate speech and sexually explicit content. We also include the total number of times our Trust and Safety team has shared self-harm prevention and support resources with users in distress.

Supporting Our Community

When our Trust and Safety team reviews a user report and believes that a member of our community may be in distress, we forward self-harm prevention and support resources directly to that user, and escalate the matter to law enforcement in cases of imminent threat to life. The resources we share are publicly available to all Snapchatters and published online4. For example, in March 2020, we expedited the launch of ‘Here For You’ in the UK - a dedicated portal within Snapchat, created in partnership with The Samaritans and The Diana Award, which shares resources when Snapchatters search for certain themes related to mental health, anxiety, depression, stress, suicidal thoughts, grief and bullying.

We also launched “Safety Snapshot” last year, a dedicated channel available in the Discover section of our app that aims to provide easily digestible tips for users on staying safe and reporting content. This can be accessed by searching “Safety Snapshot” in the Discover tab.

1 https://snap.com/en-GB/community-guidelines
2 https://snap.com/en-GB/terms
3 https://snap.com/en-GB/privacy/transparency
4 https://support.snapchat.com/en-GB/a/Snapchat-Safety
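As a rough illustration of the reporting flow described above (reported content preserved before it can disappear, round-the-clock human review, removal and support resources on violation), here is a minimal sketch; all names are hypothetical and this is not Snap's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    content_id: str
    reason: str

@dataclass
class ModerationSystem:
    content_store: dict = field(default_factory=dict)  # live content
    preserved: dict = field(default_factory=dict)      # evidence kept on report
    review_queue: list = field(default_factory=list)

    def submit_report(self, report: Report) -> None:
        # Snaps may delete by default or after 24 hours, so reported content
        # is preserved for investigation before it can disappear.
        if report.content_id in self.content_store:
            self.preserved[report.content_id] = self.content_store[report.content_id]
        self.review_queue.append(report)  # awaits the human moderation team

    def review(self, report: Report, violates: bool, user_in_distress: bool) -> list:
        actions = []
        if violates:
            self.content_store.pop(report.content_id, None)  # prompt removal
            actions.append("content removed")
        if user_in_distress:
            actions.append("self-harm prevention and support resources shared")
            # Imminent threats to life are escalated to law enforcement.
        return actions
```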

This summer, we introduced a new in-app tool called Family Centre, which offers parents, carers and other trusted adults insight into who their teens are Friends with and which Friends they recently sent Snaps and Chats to on Snapchat, without revealing the contents of the teens’ messages. With this approach, Snap has sought to balance parents’ needs for more information with teens’ needs for privacy, autonomy and growing independence. Through these tools and resources, we aim to start meaningful conversations amongst parents, carers and teens about online risks, how to stay safe and how to find support if they need it.

We hope this initial overview provides you with a sense of how Snapchat works and our overarching approach to content moderation and support for our community. In the following section, we have responded to your specific questions regarding the individual features that you mentioned within your report. Please note that we have grouped some questions together in our response.

(Question 1) There was no separation between adult and child parts of the platform or separate platforms for children and adults; and (Question 3) That the content was not controlled so as to be age-specific.

With regard to your first and third points, to confirm, Snapchat does not currently separate between adult and teen parts of the platform, nor do we have a separate platform for children (under-13s are forbidden from having an account on Snapchat) and adults. We absolutely recognise the importance of ensuring the content and experience on Snapchat is age-appropriate for the user. We have extra protections in place for our community who are below the age of 18 and, as detailed earlier in our response, provide a range of Support Resources for our community - in particular, those who may be vulnerable.

● Content:
○ Content published in the public-facing areas of the app, as detailed earlier in our response, must abide by our Community Guidelines, as well as separate and additional publisher guidelines for Discover publishers, with all content included on the app being suitable for an audience aged 13 and above.
○ We have existing safety-by-design features overlaid on top - for example, we do not enable public comments on Discover so as to limit the ability for illegal or harmful content in the comments to go viral and be surfaced to a large number of people.
○ In addition, we apply age controls to the Spotlight section of the app which block comments from users over the age of 18 on Spotlight content which has been posted by users aged between 13 and 17.

● Profiles:
○ By default, a teen must be friends with another user before being able to communicate directly.
○ There are no browsable public profiles for under-18s.
○ Friend lists are not public.
○ We limit the size of group chats, and they are not discoverable unless you are in the group or have been shared a direct link to the profile.

When a user encounters content that they believe is inappropriate or harmful, they can report it easily and quickly. We provide easy-to-use and accessible reporting mechanisms for content through our Support Website5 and Safety Centre6, and our in-app reporting tool, which Snapchat users can use to report concerns. Even if a person is not logged in or registered on Snapchat, they can still report on our support sites.

5 https://support.snapchat.com/en-GB
6 https://snap.com/en-GB/safety/safety-center

(Question 2) There was no age verification when signing up to the online platform.

Our Approach to Age Verification

We are deeply committed to ensuring children under the age of 13 are not able to access Snapchat, and we approach this in the following ways:

● Age verification at sign-up: At the point of sign-up, new users are required to provide their date of birth when they register. When a potential user enters a date of birth below the age of 13 during the registration process, the process fails. We do not inform individuals that their registration failed due to their age and, on the web, we set a cookie to discourage repeated registration attempts. If we later become aware that a Snapchat user is under the age of 13, we terminate that user’s account and delete the user’s data. There are also other measures we can take, such as blocking their device.

● Strict guidelines in our approach to marketing the app: Snap does not market Snapchat to children. It is not available in the “Kids” or “Family” sections of any app store. Snapchat is rated 12+ in the Apple App Store and rated Teen in the Google Play store, putting parents on notice that Snapchat is not designed for children. These ratings reflect Snapchat’s content, which is designed for teens and adults, and not children under 13 years of age.

Long-Term Solutions to the Challenge of Age Assurance

We are committed to continuing our work with government, regulators and industry partners to identify genuinely robust, scalable and proportionate industry-wide, long-term age-assurance approaches that could be internationally applicable, to further limit the ability for underage users to access apps. It is well recognised by government, regulators and industry that age assurance of young people is complex, with ongoing sincere concerns shared by us and other stakeholders around privacy, bias and inaccuracy. We remain committed to finding the most effective approaches whilst also protecting the privacy and data safeguards that are integral to the trust and safety of our community.

We believe that in the short to medium term, the key to developing workable solutions in this area is to capture the widest possible community of stakeholders by focusing on the components of the age verification process that have the greatest potential for impact. Interaction with one of the two app stores is a key gateway through which all users must pass before they can install apps on their phones. The two app stores are run by the two major operating system providers, Apple and Google. Introducing the two companies’ comprehensive family suites of safety and wellbeing tools - age-gates, screen time limiters, downtime settings, monitoring of app downloads and in-app purchases, white/black lists, etc. - when signing up to the app stores would identify any underage users who somehow fell through earlier (and unavoidable) entry points. We believe this to be the most viable opportunity for a robust, comprehensive and industry-wide age verification system to be developed and located, all the more so given the existence in both stores of credit-card-based verification for parents and carers. Improving those existing gate-keeping mechanisms, by which users already select and access the majority of their apps, would be a more effective, and scalable, tool to ensure children are only accessing apps which are both age-appropriate and acceptable to their parents or carers.

Expanding this thinking to a more holistic approach would also allocate responsibilities to other stakeholders in the value chain. Access to, and use of, applications require the user to pass through at least two technology “layers” before reaching the app store and operating system: the mobile operator’s data network and the hardware. Children, by and large, do not buy their own phones. At the point of purchase, the purchaser (usually the parent or carer) could be guided through the options to configure, in an age-appropriate manner, the phone’s safety parameters using the operating system tools provided, including linking to a family account with age-verification options controlled by a parent or carer for younger children. Similarly, in general, children do not sign up or pay for mobile data subscriptions. At the point of purchase, small changes to the purchase flow could be designed so that the purchaser would be guided through the options to configure, in an age-appropriate manner, both the phone’s safety parameters using the operating system tools, as well as the mobile network operators’ own tools, such as age-gates, white/black lists and parental filters.

(Question 4) That algorithms were used to provide content together with adverts.

Unlike traditional social media platforms, we don’t have a feed of unvetted or unmoderated public content. Whilst we do have some algorithms operating on content on Discover and Spotlight (the public areas of our app), the moderated and curated nature of these sections means that we already have tight limits over what is being surfaced. As such, we believe our core architecture and design decisions, which prioritise safety, limit the risk posed by the algorithms operating on our platform. We place a high value on transparency, especially on how our platform works. Our Support Page7 provides additional information on how we rank content on Spotlight. We also have a Support Page8 on ranking content on Discover. It is important to note that content on Discover, the other public-facing area of the app, features content from verified media publishers, such as Teen Vogue and the Economist, and content creators.

7 https://support.snapchat.com/en-GB/a/how-we-rank-content-spotlight
8 https://support.snapchat.com/en-GB/a/how-we-rank-content-discover

(Question 5) That the parent, guardian or carer did not have access to the material being viewed or any control over that material; and (Question 6) That the child’s account was not capable of being separately linked to the parent, guardian or carer’s account for monitoring.

Overall, we recognise Snapchat plays a central role in our community’s life and, for many young people, it’s where their most trusted and important relationships live. It’s a responsibility we take incredibly seriously. We also recognise that for many parents who haven’t grown up with the platform, Snapchat is less familiar. That’s why earlier this year, we introduced Family Centre.9

Family Centre is an in-app tool which gives parents the ability to know who their teenage children are friends with on Snapchat and which Friends they have recently sent Chats and Snaps to, while still respecting young people’s desire for some level of autonomy and privacy. This tool was developed in close collaboration with families to understand the needs of parents, carers, trusted adults and teenagers, as well as global experts in online safety and wellbeing. Family Centre allows parents to see their teen’s friend list (which is private for under-18s on the app), in addition to who they have been communicating with over the last seven days.

9 https://snap.com/en-GB/safety-and-impact/post/family-center

In the coming months, we will add additional features to Family Centre, including new content controls for parents and the ability for teens to notify their parents when they report an account or a piece of content to us. This is in recognition of the fact that, whilst we closely moderate and curate both our content and entertainment platforms and don’t allow unvetted content to reach a large audience on Snapchat, each family has different views on what content may be appropriate for their teens. We therefore want to give them the option to make those personal decisions based on, among other things, the teen’s age, maturity level and the family’s values.

Conclusion

The safety and wellbeing of our community is of utmost priority and we remain committed to our continuous work to help keep Snapchat safe. We are deeply sorry for the tragic loss that Molly’s family and friends have suffered and we hope this response provides a full picture of the ongoing efforts within Snap to address the industry-wide concerns you shared in your report. To recap, this includes:

● Introducing a range of new resources to help Snapchatters manage their mental health, safety and well-being, including ‘Here For You’ and our ‘Safety Snapshot’ Discover channel.

● Adding suicide and self-harm content as a stand-alone category in our bi-annual Transparency Report, as a way of providing additional insight and transparency into our moderation efforts on this important subject.

● A continued commitment to age-assurance solutions. We are continuing to work, globally, with government, regulators and industry partners to identify proportionate, innovative and long-term age-assurance solutions. This is an evolving landscape, with emerging technologies and approaches developing, which we are constantly monitoring with a view to finding a long-term solution.

● Introducing Snapchat's Family Centre - a tool designed to offer parents, carers and other trusted adults insight into their teens' Friends and which Friends they have recently sent private messages to on the app, while at the same time protecting teens' privacy, autonomy and growing independence. Parents or carers can view their teens' friend lists, see who they communicated with in the last seven days and report to Snap accounts that may be of concern to them. Additional features are planned for release in the coming months.

● Pre-moderated public content on Snapchat.
○ Across our app, we limit opportunities for potentially harmful content to ‘go viral’.
○ All content on Spotlight and Discover is pre-moderated, making it a safer experience. Our content platform, Discover, only features content from approved media publishers and content creators, and Spotlight is moderated using automated review for all content and human review before any content can reach a large audience.

Snap Confidential ● Our recently refreshed and expanded Global Safety Advisory Board - led by Head of Global Platform Safety Jacqueline Beauchere MBE, this group brings together leading safety experts, including three UK members (of an 18-strong global board), to educate, challenge, raise issues and advise Snap on how to keep the Snapchat community safe. Our experts bring a wealth of experience, including those who specialise in combating bullying, teenage mental ill health and related risks. ● Extra protections for under 18s: ○ By default, teens have to be mutual friends on Snapchat before they can start communicating with each other. ○ Friend lists are private, and we don’t allow users under the age of 18 to have public profiles. ○ And we have protections in place to make it harder for strangers to find teens. For example, teens only show up as a "suggested friend" or in search results in limited instances, like if they have three mutual friends in common. In response to your original report, relating to action taken or proposed to be taken, we hope the information detailed throughout our response explains how our model is different to other open newsfeed platforms. We recognise that we have an ongoing responsibility to proactively support our community when they are vulnerable, which is why we have introduced a number of additional Support Resources, including measures for under 18s, which we have included in this response. Once again, we would like to extend our deepest sympathies to Molly’s loved ones for their tragic loss. If you have any additional questions with regards to Snapchat, please do not hesitate to respond to me.
Department for Digital, Culture, Media and Sport
8 Dec 2022
AI summary: The government plans to enact the Online Safety Bill, which will require platforms to prevent children from accessing harmful content, implement age verification, and provide tools such as parental controls. Amendments to the Bill will require large platforms to publish risk assessments and will name the Children’s Commissioner as a statutory consultee for Ofcom.
View full response
Dear Rebecca,

Thank you for providing a copy of your Regulation 28 Report dated 13 October, issued following the Inquest into the death of Molly Rose Russell.

I understand that you will share a copy of this response with Molly’s family, and I would first like to express my sincere condolences for their loss. Every death is tragic, but especially so when it involves a young person. This case demonstrates exactly why holding platforms to account for harmful content and activity online is so important.

You have made a number of recommendations for the government to consider regarding the provision of online services to children. You have recommended that the government considers enacting legislation to ensure the protection of children from the effects of harmful online content. You have also recommended that consideration is given to the setting up of an independent regulatory body to monitor online platform content, with particular regard to the following specific concerns from the Inquest:

1. That there was no separation between adult and child parts of the platforms or separate platforms for children and adults.
2. That there was no age verification when signing up to the online platform.
3. That the content was not controlled so as to be age specific.
4. That algorithms were used to provide content together with adverts.
5. That the parent, guardian or carer did not have access to the material being viewed or any control over that material.
6. That the child's account was not capable of being separately linked to the parent, guardian or carer's account for monitoring.

Finally, you have suggested that platforms themselves could give consideration to self-regulation taking into account the matters raised above. I will address these concerns in turn.

The government is committed to introducing the strongest possible protections for children online. The Online Safety Bill (the Bill) was introduced to Parliament on 17 March and this groundbreaking piece of legislation will deliver the government’s manifesto commitment of making the UK the safest place in the world to be online. The Bill will make technology providers accountable to an independent regulator for keeping their users, particularly children, safe online. The government is committed to ensuring the legislation is in place in a timely fashion; however, it is important to note that the Bill may change during its Parliamentary passage, with its final form and approval being the responsibility of Parliament.

The Bill will apply to providers of services which host user-generated content or facilitate user-to-user interactions, including the services used by Molly Russell, as well as to search services. All providers in scope will need to take robust action to address illegal content and criminal behaviour on their service. Assisting suicide has been named as a priority offence under the Bill, meaning that providers will be required to take proactive steps to prevent users from being exposed to this content and behaviour, and swiftly remove it if it is uploaded to the service. Beyond the priority offences, all providers will need to ensure that they have effective systems and processes in place to quickly take down other illegal content or behaviour once it has been reported or they become aware of its presence.

The government has recently announced that it will bring forward a new offence to address communications that promote self-harm. All companies in scope will therefore need to tackle this content under the illegal content safety duties and the individuals posting such content will be criminally liable. The government is in the process of drafting the new offence. Separate legislation will be introduced when Parliamentary time allows to cover anyone who physically assists someone to self-harm, for example, by providing them with an instrument to cut themselves.

The strongest protections in the Bill are for children. As well as protecting children from illegal material, providers of services which are likely to be accessed by children will also have to assess the risks their service poses to children from harmful or age inappropriate content and activity, and apply safety measures to protect their child users. The government will set out the priority categories of harmful material to children in secondary legislation.

The Bill will be overseen and enforced by Ofcom. As the independent regulator, Ofcom will set out in codes of practice the steps that providers can take to comply with their duties. Ofcom will also have a range of enforcement powers, including substantial fines and, where appropriate, business disruption measures (including blocking). There will also be a criminal offence for senior managers who fail to ensure their company complies with Ofcom’s information requests, to drive strong compliance in this area.

Separation of Children and Adults on Online Services and Age Verification

Turning to the first two specific areas of concern you have raised, the Bill sets out clear duties to ensure children are only able to access content that is appropriate for their age group. The Bill will require providers to ensure that children are not able to access services, or parts of services, that pose the highest risk of harm, including those hosting age-inappropriate or harmful material for children. For services which are only appropriate for certain age groups, providers will likewise need to take steps to ensure that only children who are old enough are able to access the service. The Bill is, in general, technology-neutral in order to ensure it does not become outdated in future, and so does not mandate the use of specific technologies such as age assurance or age verification. However, age assurance and age verification are clearly referenced on the face of the Bill as measures which may need to be used by providers in order to meet their duties. Ofcom may also recommend other effective measures in its codes of practice. Where children are able to use their service, providers will also need to provide other age-appropriate protections for children. This includes protecting children from harmful content and activity and reviewing children’s use of higher-risk features, such as live streaming or private messaging.

The government has also recently announced that it will strengthen the Bill’s protections for children, to make it even more explicit that providers of services with age restrictions will have to ensure that only users who are old enough are able to access their service. These providers will now need to explain in their terms of service the measures they use to enforce age restrictions, such as the use of age assurance or age verification technologies. This will prevent providers from saying their service is, for example, for users aged 13+/16+ in their terms of service while doing nothing to prevent younger children accessing it.

Age-Specific Content Controls

On your third area of concern, the Bill will require providers of services likely to be accessed by children to put in place age-appropriate protections for children from harmful content and activity. User-to-user services, including social media platforms, will have a responsibility to prevent all children from accessing content that is designated as ‘primary priority’ content that is harmful to children on their service, and to protect children in age groups which are judged to be at risk from other ‘priority’ content. Search services will have similar duties to minimise the risk of children encountering harmful content in search results. This will have the effect of requiring providers to consider whether content is safe for specific user age groups.

On 7 July, the government published a Written Ministerial Statement setting out the categories it expects to be designated as primary priority content and priority harmful content to children. Content promoting self-harm and legal suicide content are among the proposed categories of primary priority content that is harmful to children, which means providers will need to take robust steps to prevent children of all ages from encountering this content on their service. Providers will also have an overarching duty to identify any other content which meets the definition of harm to children in the Bill as part of their risk assessment, and protect children in age groups at risk from this content. We also expect providers to consider measures such as signposting children to sources of support, where they are actively searching for harmful content. Ofcom will set out details of these measures in their codes of practice.

Use of Algorithms and Advertising

On your fourth area of concern, the Bill will require providers to specifically consider, as part of their risk assessments, how algorithms could impact children’s exposure to illegal content, and to content which is harmful to children, on their service. Providers will need to take steps to mitigate and effectively manage any risks, and consider the design of functionalities, algorithms and other features to meet the illegal content and child safety duties. Ofcom will also have a range of powers at its disposal to help it assess whether providers are fulfilling their duties, including the power to require information from providers about the operation of their algorithms. Ofcom will be able to hold senior tech executives criminally liable if they fail to ensure their company provides Ofcom with the information requested. Furthermore, advertising content that is indistinguishable from other user-generated content, for example influencers advertising products through their user-generated content posts, will be subject to the strong illegal content and child safety duties in the Bill. Ahead of the Bill’s implementation, we expect providers to be transparent about design practices which encourage extended engagement, and to engage with researchers to understand the impact of these practices on their users, in particular children. We also welcome voluntary efforts from industry to develop tools to help children and families understand and manage how much time children spend online.

In addition to the Bill, the Online Advertising Programme is considering how advertising regulation should be modernised for the digital age and is reviewing the spectrum of harms caused by paid-for online advertising. It will look at the role of all parties in the supply chain, including intermediaries, services and publishers not currently covered by regulation, to provide a holistic review of the regulatory framework. The government consulted publicly on its proposals for the Online Advertising Programme earlier this year. We will publish a response to the consultation in due course.

Parent, Guardian or Carer Access, Control and Monitoring

With regard to your fifth and sixth areas of concern, Ofcom will set out in codes of practice the steps that providers can take to comply with the child safety duties and, where proportionate, this could include the use of parental controls or linked accounts for children of certain age groups. The Bill will also require providers to enable “affected persons”, which could include children or their parents, guardians or carers, to report harmful content to the service.

The government has also announced that it will make changes to the Bill to strengthen the protections for children. The Bill will be amended to require the largest platforms to publish summaries of their risk assessments for illegal content and material that is harmful to children, to allow users, and empower parents, to clearly understand the risks presented by these services and the approach platforms are taking to children’s safety. Moreover, we are naming the Children’s Commissioner as a statutory consultee for Ofcom in its development of the codes of practice, ensuring that Ofcom considers the experience of children and young people in its delivery of the codes.

Finally, with regards to self-regulation ahead of legislation, the government agrees that providers should be taking proactive steps now to improve safety online, particularly for children, and not wait for the legislation to come into force before acting. The government has published resources to support providers to take voluntary action to improve safety for their users, especially children. In June 2021, we published ‘Principles of safer online platform design’ guidance and a “One-Stop Shop” for child online safety on GOV.UK. These are resources which give practical guidance for providers on what they can do to design safer services and further increase children’s safety online ahead of the new regulatory framework.

Thank you again for bringing your concerns to my attention. I trust that this response provides assurance that the appropriate action is being taken.
Pinterest
8 Dec 2022
AI summary: Pinterest has updated its self-harm policy to ensure stricter enforcement, removing references to self-harm or suicide in artwork, memes, or jokes for all users. The company also plans to develop methods to further limit the distribution of depressive content to teens and to partner with a third-party content checking service by the end of 2023.
View full response
Dear Senior Coroner,

Regulation 28 Report concerning Molly Russell

Thank you for your Prevention of Future Deaths report dated 13 October 2022, in which you asked Pinterest, amongst others, to provide a response following the Inquest into the death of Molly Russell. This response is provided by Pinterest Europe Limited, a designated Interested Person in the Inquest. We provide our response after attending and giving evidence at the Inquest and carefully considering your conclusion and six concerns (which we address below). In response to your report, we wish to highlight that Pinterest is committed to taking the following actions and plans to actively work to implement these changes by the end of 2023:
1. To develop ways to further limit the distribution of depressive content on Pinterest to teens. Molly’s case has reinforced that depressive content merits careful treatment. We will develop and test automated signals to understand how best to limit the distribution of depressive content to teens on Pinterest, for example by not showing “more like this” prompts if a teen views a Pin that may be depressive. In addition, we will continue to ensure that we do not send notifications containing depressive content to Pinterest users (who we call “Pinners”), and that we do not recommend searches for depressive quotes as autocompletes or “ideas you may love” to any Pinners.
2. To update our self-harm policy to ensure stricter enforcement, starting with removing certain content for all Pinners, rather than limiting its distribution. Specifically, we have updated our policies to remove references to self-harm or suicide in artwork, memes, or jokes.
3. To partner with a third party content checking service with the aim of providing independent testing of our progress in our moderation efforts with respect to self-harm and suicide content on Pinterest.
4. To consult with mental health experts to ensure that we are delivering the best possible resources to Pinners who search for self-harm or suicide related content.

5. To continue to work through the challenges of age assurance with experts, legislators, and the rest of the market.

We also acknowledge and welcome the changing regulatory landscape with respect to content moderation and user safety online. We will take the voluntary actions above in addition to preparing for upcoming legislative changes in this area, both in the UK and beyond.

Introduction and background to Pinterest

By way of background, Pinterest is a visual inspiration platform used by over 400 million people worldwide to discover and save ideas. People typically come to Pinterest to find inspiration for recipes to try, travel ideas, fashion and beauty looks, home and style products to buy, and more. Pinners save the ideas they discover on the platform or the wider Internet to ‘Boards’ which they create and maintain on their ‘profile’. Ideas saved onto Boards are called ‘Pins’. Many relate to subjects such as fashion, cooking, style, travel and home decor, but other topics such as wellbeing or self-help are also available. As our users, our Pinners, save and share images and links they find on Pinterest or the Internet, the content of Pins available on the platform varies enormously, and can include content that is prohibited by our Community guidelines (until that content is either reported to or discovered by us).

We take content moderation seriously, and have worked with external experts to ensure that our policies give detailed guidance on what is considered ‘helpful’ versus ‘harmful’, and how to navigate that distinction. Our aim is that these policies keep Pinterest an inspirational space for all of its users. Our core value is to Put Pinners First. We carefully listened to all of the evidence during the Inquest, and Molly’s story has reinforced our commitment to making ongoing improvements to help ensure that our platform is a positive and safe space for all Pinners, including teenagers. We want Pinterest to be a place for inspiration and we know that our policies, practices and technologies must always evolve to create a safer and more positive corner of the Internet. We remain committed to listening, learning and engaging in the global conversation between platforms, regulators and civil society about online safety. We believe it is critical for platforms to collectively tackle illegal content, and we hope that the Online Safety Bill achieves a system which has the safety of users at its core. This Prevention of Future Deaths report, and Molly’s case more broadly, are critical elements in that ongoing discussion.

We combine human moderation with automated machine learning technologies to reduce policy-violating content on the platform. We continue to review, iterate and update our moderation processes as expert guidance and machine learning technologies evolve, and welcome this report as a critical step in that process. We are committed to taking the five specific actions outlined above in response to your report. Those actions, which will be implemented by the end of 2023, will be taken in addition to monitoring any regulatory or compliance actions required by changes to the law in this area. They will also be taken in addition to the key steps we already take to specifically protect users between the ages of 13 and 17 (“teens”) on our platform in the UK, as explained in more detail below.

Your Concerns
1. There was no separation between adults and children on the same platform, nor separate platforms for adults and children.
2. The content was not controlled so as to be age specific
3. There was no age verification on registration.
4. Algorithms were used to provide content together with adverts.
5. That the parent, guardian or carer did not have access to the material being viewed and did not have any control over that material.
6. That the child's account was not capable of being separately linked to a parent, guardian or carer's account for monitoring.

As highlighted and described more fully below, Pinterest commits to the following actions in response to these concerns.
1. We will develop and test tools to further limit the distribution of sad or depressive content on Pinterest to teens.

We do not allow anyone under the age of 13 to create a Pinterest account. For users aged 13 and over, we seek to ensure user safety regardless of the age of the user. As a platform dedicated to positivity, Pinterest is committed to putting the interests of Pinners, including those between 13 and 17, first when designing and developing products that they might access. As such, the content available on Pinterest to users aged 13 to 17 does not differ from the content available to users aged 18 and over (although UK users aged 13 to 17 will not see paid targeted advertising on Pinterest, and will be shown separate, age-appropriate information, e.g. about their privacy settings).

Pinterest aspires to be a positive place on the internet and we take a strong approach to prohibiting content that does not fit with our mission to bring everyone the inspiration to create a life they love. Since not all content is inspiring, we have Community guidelines that outline the types of content we do not allow on Pinterest. Pinterest is not a place for hateful content, misinformation or violence, or for the people and groups that spread such content. We have industry-leading policies, including comprehensive policies covering Hateful Activities, Misinformation, Dangerous Actors, Graphic Violence and many more types of harmful content, and we have dedicated reporting options for users to report such content to us. For example, Pinterest prohibits weight loss ads, climate misinformation, child sexual exploitation, illegal drugs, and adult content, including pornography. Our aim is that these policies keep Pinterest safe for all of its users.

With that said, we know we can always improve. Our policies, practices and technologies must always evolve to keep up with new behaviours, trends and technological advances. To date, we have taken various actions to strengthen how we combat policy-violating content on our platform, which have led to significant improvements. For example:

● We continue to use and improve automated machine learning as a moderation tool to reduce the volume of policy-violating content on our platform.

● We block search results for terms that violate our policies, including terms associated with self-harm, suicide, drug abuse, and eating disorders, and display an advisory that connects users with resources if they or someone they know are struggling.
● We stop content from certain websites dedicated to spreading harmful content from being saved to Pinterest.
● We have implemented dedicated reporting options for users to report policy-violating content to us.
● We keep our policies under review and update them against guidance from external expert organisations.
● We have put in place additional measures to help protect Pinners, including those aged between 13 and 17 (for example, additional privacy measures, which are set out in further detail below).
● We partner with external organisations and participate in industry-wide groups to increase awareness, share knowledge and develop industry best practices.
● We support the creation of a safer and more positive experience online and actively engage with legislators globally (including in the UK regarding the Online Safety Bill) in the effort to create a safer Internet.

As additional commitments, we will develop and test tools to further limit the distribution of depressive content on Pinterest to teens. Molly’s case has highlighted that this issue merits careful treatment. More specifically, we will develop and test automated signals to understand how best to limit the distribution of depressive content to teens on Pinterest, for example by not showing “more like this” prompts if a teen views a Pin that may be depressive. In addition, we will continue to ensure that we do not send email notifications containing depressive content to Pinners, and that we do not recommend searches for depressive quotes as autocompletes or “ideas you may love” to any Pinners.
2. We will update our self-harm policy to ensure stricter enforcement.

In addition to the content moderation changes noted above, we have made further changes to our self-harm policy to ensure stricter enforcement of certain categories of content. We already remove anything considered to encourage self-harm, or to mock or bully. As an additional commitment, we have expanded this policy to also remove, rather than merely limit the distribution of, references to self-harm or suicide in artwork, memes, or jokes.
3. We will partner with a third party content checking service with the aim of providing independent testing of our progress in our moderation efforts with respect to self-harm and suicide content on Pinterest.

With respect to algorithms, our approach is to focus on robust content moderation policies to ensure that, as far as possible, policy-violating content is not available to be distributed on Pinterest (algorithmically or otherwise). However, some such content still makes its way onto our platform. To moderate it, we take a hybrid approach, employing both automated tools and manual review to take action against this content. More specifically:

● In relation to policy-violating Pins, when our content moderation practices (automated, manual or hybrid) either remove or limit the distribution of such Pins on the platform, Pinterest's algorithms will not identify or recommend those Pins to individuals via search, homefeed, or recommendations. We also undertake additional ad hoc sweeping clean-up efforts. For example, during the first half of 2022 these efforts led to the deactivation of approximately 15,000 Boards containing a total of approximately 2.4 million Pins. Separately, as part of the same process, we deactivated approximately 843,000 further Pins.
● We maintain a voluminous Sensitive Terms List of blocked search terms, meaning that if a teen searches for the word 'suicide' or similar, the search will return no results and will instead provide a list of professional helpline resources to contact. Autocomplete in the search toolbar is also blocked for these terms, so users who partially type out the word 'suicide' will not be auto-prompted to search for it (an illustrative sketch of this gating pattern appears below). We constantly update this list (including in response to changes in usage), and at the time of writing there were over 50,000 terms on it.

Although we have maintained robust efforts in these areas, we know we can always improve. As an additional commitment, we are undertaking a comprehensive review of the groups we partner with, to get additional advice and feedback on our policy and enforcement approaches to self-harm, with the plan of expanding our partnerships in this area. In conjunction with this expanded outreach, we plan to partner with a third party content checking service with the aim of providing independent testing of our progress in moderation efforts with respect to self-harm and suicide content on Pinterest.
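To make the gating pattern described above concrete, the following is a minimal, purely hypothetical Python sketch of how a sensitive-terms blocklist might suppress both search results and autocomplete suggestions while surfacing helpline resources instead. It is not Pinterest's actual implementation; every name in it (SENSITIVE_TERMS, HELPLINES, run_index_lookup) is an assumption made for illustration only.

    # Minimal, hypothetical sketch of a sensitive-terms gate; not Pinterest's code.
    SENSITIVE_TERMS = {"suicide", "self harm"}   # stand-in for a 50,000+ term list
    HELPLINES = ["Samaritans (UK): 116 123"]     # professional support resources

    def run_index_lookup(query):
        """Hypothetical stand-in for the real search backend."""
        return []

    def search(query):
        """Blocked terms return no results, only an advisory with helplines."""
        q = query.strip().lower()
        if any(term in q for term in SENSITIVE_TERMS):
            return {"results": [], "advisory": HELPLINES}
        return {"results": run_index_lookup(q), "advisory": None}

    def autocomplete(prefix, candidates):
        """Never auto-prompt a user toward a blocked term."""
        p = prefix.strip().lower()
        return [c for c in candidates
                if c.lower().startswith(p)
                and not any(term in c.lower() for term in SENSITIVE_TERMS)]

In practice the matching would need to be far fuzzier than simple substring checks (misspellings, spacing variants, coded phrases), which is one reason the letter describes a list of over 50,000 terms that is constantly updated in response to changes in usage.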
4. We will consult with mental health experts to ensure that we are delivering the best possible resources to Pinners who search for self-harm or suicide related content.

We are committed to ensuring that resources for parents remain relevant and useful and are kept up to date in light of changes in product functionality. We are aware that other, larger platforms have recently started to introduce enhanced functionality in this area; we are actively considering best practices and will continue this work in 2023.

To help Pinners better understand their privacy choices, we have published a Help Centre article that offers users various privacy resources using language that can be easily understood by typical 13-17 year olds. When a 13-17 year old user registers for a Pinterest account, a prominent pop-up notice containing a link to this article is presented. In addition to consolidating privacy resources for users, we have also published a Help Centre article for parents of teens on Pinterest. This article explains our minimum age requirements, provides Pinterest privacy resources, and specifies ways for parents to notify us if they suspect their underage child has a Pinterest account.

We also participate in a number of partnerships and programmes in order to develop and implement industry best practices.

● We are part of Samaritans’ Online Excellence Programme, a three-year industry-wide programme to promote consistently high standards across the sector in relation to self-harm and suicide content. The programme includes a research and insight programme, industry guidelines to support sites and platforms in managing self-harm and suicide content online using safe and sensitive approaches, an online harms advisory service and a hub of online safety resources.

● We are a member of the Digital Trust & Safety Partnership, which brings together a number of leading technology companies committed to developing industry best practices and providing objective and measurable third-party assessments of members’ trust and safety practices. The Partnership engages with consumer and user advocates, policymakers, law enforcement, relevant NGOs and various industry-wide experts.
● We regularly engage, individually and with other midsize platforms, in stakeholder discussions around key legislative developments in this area, including making submissions to the UK Government during its Online Harms White Paper consultation.

We share the UK government's commitment to addressing online safety because we want Pinterest to be an inspiring and welcoming place for everyone. We also agree with the UK government that 'online safety is a shared responsibility between companies, the government and users.' We believe it is important for platforms to collectively tackle illegal content and prevent it from simply moving between platforms. We hope that the Online Safety Bill in the UK and Ofcom, as the proposed independent regulator, achieve a system which has user safety and risk management at its heart. Cooperation between platforms in achieving online safety is critical in our view, as a greater degree of inter-platform collaboration will be essential to prevent the spread of illegal content online.
5. We will continue to work through the challenges of age assurance with experts, legislators and the rest of the market.

Age assurance is a key priority for Pinterest in order to help protect the safety of both teen Pinners and those too young to open an account (under-13s). These are industry-wide challenges, technological solutions continue to evolve, and we remain committed to exploring the best ways to address them. Unless and until age assurance technology works with greater efficacy, teens will still find ways to circumvent the age assurance process. Similarly, there are active debates about whether age assurance regimes may place undue burdens on an internet user's privacy, by preventing them from visiting a site if they wish to withhold information about their identity from an internet platform. As these debates continue, we are of course aware that regulatory expectations in this area are likely to become more demanding in the medium term, including in the UK. We support the underlying goals of such initiatives. We will continue supporting cross-industry efforts to develop technological solutions to the challenges posed by age assurance, and thereby enhance the safety of younger children on the internet.

We take age assurance measures seriously and continue to monitor best practices in this area. We have taken measures to prevent children below 13 from signing up to use Pinterest. At account registration, we require new users to provide their age. When a Pinner inputs an age below 13, we inform them that they are not eligible to join, using a neutral message to discourage any false declarations of age. We also employ blocking mechanisms on mobile and web to prevent users from re-submitting a new age when they are denied access. Further, when we ascertain that a user has self-declared that they are underage on the platform, or when a parent writes in stating that their child is underage, we delete that user's account. Subject to verification, we allow parents to request deletion of their child's account, as well as access to all personal information associated with their child's account, including Pins saved and private Boards.
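As a purely illustrative reading of the registration flow just described (and not Pinterest's actual signup code), a neutral age gate with retry blocking might look like the following Python sketch. The device-level denial store is a hypothetical stand-in for whatever blocking mechanisms exist on mobile and web.

    # Hypothetical sketch of a neutral under-13 age gate; not Pinterest's code.
    MINIMUM_AGE = 13
    _denied_devices = set()   # stand-in for persistent mobile/web blocking state

    def attempt_signup(device_id, declared_age):
        # A device that has already been denied cannot retry with a new age.
        if device_id in _denied_devices:
            return "Sorry, you're not eligible to join."
        if declared_age < MINIMUM_AGE:
            _denied_devices.add(device_id)
            # Neutral wording: the message does not reveal the age threshold,
            # which discourages false declarations on a second attempt.
            return "Sorry, you're not eligible to join."
        return "account created"

The design point the letter emphasises is the neutral message: because the denial never states the cutoff, a child who is refused does not learn what age to declare on a retry, and the retry itself is blocked.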

As many in the technology industry have noted, determining the age or age range of an online user is challenging. For this reason, we are continuing to evaluate our approach to age assurance, taking into account the UK Information Commissioner's opinion and guidance on this issue. We have already taken a number of content and privacy measures that specifically protect teens in the UK on the platform:

Advertising Changes

We have ceased displaying paid targeted advertisements to users between the ages of 13 and 17 in the UK.

Adapting Product Experiences for Teens

In addition to providing users with educational resources, we have also adopted a number of changes to Pinterest. For example, for UK teens, privacy personalisation sliders default to “off” and cannot be changed. Teens will not receive personalised Pinterest recommendations based on their off-Pinterest activity, and we will not use their Pinterest activity to advertise Pinterest to them on other services. The privacy personalisation sliders also control the personalisation of advertising using a user’s off-Pinterest activity, but this does not apply to teens on Pinterest since they have been excluded from paid targeted advertising, as explained above.

For teens, the “Search Privacy” setting is defaulted to “on”. The “Search Privacy” setting means that Pinterest users have a tag added to their profiles which tells Google, Bing, or other search engines not to include their profile information in search results (an illustrative sketch of such a tag appears at the end of this response). In addition, teens have notifications defaulted to “off” (excluding routine account service messages), but can choose to receive notifications through their Privacy and Data Settings.

Monitoring Messaging

We have a strong interest in protecting teens from unwanted contact from adults. We recently implemented a significant change to the default messaging settings for teens to make those settings more restrictive. The default messaging settings now prevent strangers from messaging teens; as a result, the default setting blocks messages from individuals not connected to those users on Pinterest.

Pinterest Help Centre

The Pinterest Help Centre also provides information to parents of teens on Pinterest which explains our minimum age requirements, provides Pinterest privacy resources and specifies ways for parents to notify us if they suspect their underage child has a Pinterest account, so that it can be deleted.

Other Considerations

We have considered whether separate platforms for those over and under 18, and/or providing age-specific content to those two groups, would make Pinterest safer. However, we have concerns about the efficacy of these proposals in achieving safety for teen users, and therefore about their proportionality. For example:

● We aspire to create a positive environment for all users through our content moderation efforts, regardless of the user's age. Separating the two age groups would not put a stop to policy-violating content, and we would still encounter the same moderation challenges. Introducing additional and separate content moderation expectations for teen users risks diluting our existing moderation efforts, which since 2017 have led to a significant reduction in the prevalence of high-risk content on Pinterest.
● The creation of a two-tiered moderation system could undermine efforts, and divert resources, working diligently to ensure that the Pinterest platform is safe for all users, including those over 17 who may also be particularly vulnerable to specific content. We are therefore currently prioritising improving content moderation processes on the existing, single platform, in order to improve the safety of teen and adult users alike.

We appreciate the opportunity to engage on these issues and will continue our commitment to learn and implement best practices in this area. We hope the actions we’ve outlined will have a meaningful impact as we continue to make improvements.
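For readers unfamiliar with the “tag” mentioned under Search Privacy above: one standard mechanism for asking Google, Bing and other crawlers not to index a page is the robots meta tag. The Python sketch below is a hypothetical illustration of emitting that tag for profiles with the setting switched on; it is not Pinterest's actual code, and the function name is invented for illustration.

    # Hypothetical illustration of a search-privacy tag; not Pinterest's code.
    def profile_head_html(search_privacy_on):
        tags = ['<meta charset="utf-8">']
        if search_privacy_on:
            # Standard directive honoured by major search engine crawlers.
            tags.append('<meta name="robots" content="noindex">')
        return "\n".join(tags)

    # Teens default to Search Privacy "on", per the letter above.
    print(profile_head_html(search_privacy_on=True))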

Signed: ______________

On Behalf of Pinterest Europe Limited
Report Sections
Investigation and Inquest
On the 21st November 2017 I opened an investigation touching the death of Molly Rose Russell, aged 14 years. I opened an inquest on the 1st December 2017. The inquest concluded on the 30th September 2022. The conclusion of the inquest was: “Molly Rose Russell died from an act of self-harm whilst suffering from depression and the negative effects of on-line content”. The medical cause of death was 1a Suspension.
Circumstances of the Death
Molly Rose Russell was found having hanged herself on the Twenty-First of November 2017. Molly was 14 years old. Molly appeared a normal, healthy girl who was flourishing at school, having settled well into secondary school life, and displayed an enthusiastic interest in the Performing Arts. However, Molly had become depressed, a common condition affecting children of this age. This then worsened into a depressive illness. Molly subscribed to a number of online sites. At the time that these sites were viewed by Molly, some of them were not safe as they allowed access to adult content that should not have been available for a 14-year-old child to see. The way that the platforms operated meant that Molly had access to images, video clips and text concerning or concerned with self-harm, suicide, or that were otherwise negative or depressing in nature. The platforms operated in such a way, using algorithms, as to result, in some circumstances, in binge periods of images, video clips and text, some of which were selected and provided without Molly requesting them. These binge periods, where they involved this content, are likely to have had a negative effect on Molly. Some of this content romanticised acts of self-harm by young people on themselves. Other content sought to isolate and discourage discussion with those who may have been able to help. Molly turned to celebrities for help, not realising there was little prospect of a reply. In some cases, the content was particularly graphic, tending to portray self-harm and suicide as an inevitable consequence of a condition that could not be recovered from. The sites normalised her condition, focusing on a limited and irrational view without any counterbalance of normality. It is likely that the above material, viewed by Molly, who was already suffering from a depressive illness and vulnerable due to her age, affected her mental health in a negative way and contributed to her death in a more than minimal way.
Related Inquiry Recommendations

Public inquiry recommendations addressing similar themes

Recommendation · Inquiry · Related theme
Pre-screening by Internet Providers · IICSA · Harmful Algorithmic Content Promotion
Mandatory Reporting · IICSA · No mandatory child abuse reporting
Age Verification Online · IICSA · Harmful Algorithmic Content Promotion
Publish interim online harms code of practice · IICSA · Harmful Algorithmic Content Promotion
Pre-screen material before upload · IICSA · Harmful Algorithmic Content Promotion
Redraft canonical crimes as crimes against the child · IICSA · No mandatory child abuse reporting
Westminster whistleblowing policies for CSA · IICSA · No mandatory child abuse reporting
Government department safeguarding policy reviews · IICSA · No mandatory child abuse reporting
Political party safeguarding policies · IICSA · No mandatory child abuse reporting

Data sourced from Courts and Tribunals Judiciary under the Open Government Licence.