Miranda Avanzi

PFD Report: Partially responded · Ref: 2024-0626
Date of Report: 14 November 2024
Coroner: Ian Potter
56-Day Response Deadline: 9 January 2025 (est.)
Responses: 2 of 3 received · 441 days past deadline · 1 response outstanding
About PFD responses

Organisations named in PFD reports must respond within 56 days explaining what actions they are taking.

Source: Courts and Tribunals Judiciary

Coroner’s Concerns
The coroner's matters of concern were as follows:

The police found a printout from a blog post close to Ms Avanzi while investigating the initial circumstances of her death. That 10-page document contains a step-by-step guide, including pictures and diagrams, on how to ‘succeed’ in ending one’s life by ‘partial hanging’. It was clear that this guide had been followed in the circumstances of this case.

While it is not obvious which website or forum this particular guide came from, it does cite numerous sources including ‘ ’ and ‘ ’. Just a basic search on Google or other search engines reveals a significant number of forums and blogs where users are able to obtain all manner of guides to completing suicide. Many of these sites have no, or no useful, requirement for any type of age verification. The search engine suggests, at the top of the page, that help is available by dialling 999, which would appear to be an acknowledgement that the content resulting from the search is likely to be concerning and that the person undertaking the search is likely already highly vulnerable.

The ready availability of such information, which provides clear instructions and advice for individuals wanting to end their own life at their own hands, is of the utmost concern.
Responses
DSIT
17 Dec 2024
AI summary: DSIT highlights the recently enacted Online Safety Act 2023, which makes intentionally encouraging suicide a priority offence and places duties on online platforms. While implementation phases are ongoing, with illegal safety duties fully in effect by Spring 2025, the government has already established the regulatory framework to tackle online harms, including suicide content.
Dear Mr Potter,

I want to thank you for the opportunity to respond to this Report to Prevent Future Deaths regarding the death of Miranda Emilia Avanzi. I am responding in place of the Secretary of State for Culture, Media and Sport, given my department’s responsibility for the Online Safety Act 2023 (‘the Act’). I want to start by expressing how deeply saddened I was to read about the tragic circumstances around Ms Avanzi’s death and would like to offer my sincere condolences to her family, friends and all those affected.

As you will be aware, under section 2(1) of the Suicide Act 1961 (the responsibility of the Ministry of Justice) it is an offence for a person to do an act capable of encouraging or assisting the suicide or attempted suicide of another person, with the intention that their act will encourage or assist suicide or an attempt at suicide. The criminal offence has a high threshold to avoid criminalising people who are expressing suicidal feelings and those offering them support by, for example, sharing their own experiences. Whether a prosecution for encouraging or assisting suicide can be brought is a matter for the Crown Prosecution Service following a police investigation. As you may know, the Act is a regulatory framework which gained Royal Assent in October 2023, and the government has since been working with Ofcom, the independent regulator, to implement it as quickly and effectively as possible. The Act tackles illegal and legal forms of online suicide content in a number of ways.

Under the Act, intentionally encouraging or assisting suicide is a priority offence. Priority offences reflect the most serious and prevalent illegal content and activity, against which companies must take proactive measures, as well as ensuring their services are not used to facilitate or commit a priority offence. When in force, all services in-scope of the Act will need to proactively prevent all users from being exposed to illegal user-generated suicide content.

User-generated content is content which has been generated directly on the service by a user of the service, or uploaded or shared by a user of a service as opposed to generated, uploaded or published by the site or service owner. A service is in-scope of the Act if it allows user-generated content, for example images or content descriptions uploaded by users. This will include, for example, pro-suicide forums and online marketplaces which allow user-generated content. All in-scope services must proactively tackle regulated content that amounts to an offence under the Suicide Act 1961, and they must minimise the length of time for which such content is present.

We expect the illegal content duties to be fully in effect from Spring 2025. The Act also introduced a new communications offence of encouraging or assisting serious self-harm. This criminalises acts capable of encouraging or assisting the serious self-harm of another person, done with that intention, and was commenced on 31 January 2024.

Furthermore, the Act imposes child safety duties on user-to-user and search services. When fully in effect, the Act will require all user-to-user services and search services that are likely to be accessed by children to assess the risk of any content harmful to children being encountered by children on their service.

Providers of these user-to-user services have a duty to use highly-effective age assurance (HEAA) to prevent children encountering the ‘Primary Priority Content’ (PPC) which is harmful to children, where they identify such content on the service. PPC includes content which encourages, promotes or provides instructions for suicide. Search services are not required to employ HEAA, but must minimise the risk of children encountering this content in search results or on the pages they land on when they click on them.

As you highlight in your report, users are able to access harmful suicide content via search engines. Under the Act, search services have targeted duties which require them to minimise the risk of all users encountering illegal content, including illegal suicide and self-harm content. In practice, this could look like removing results for sites that are known to host illegal suicide and self-harm content. Search services must also take or use, where proportionate, user support measures which might, for example, signpost users towards sources of support. You state in your letter that you have already seen a source of support being offered by a search engine and that this indicates that the content being viewed is harmful. Whilst the Act does not restrict adult users’ access to legal content, as noted above, search services must provide children with additional protections from PPC.

Ofcom has published draft Children’s Safety Codes which include proposed recommended measures that regulated services put in place to fulfil their child safety duties. The proposed measures for search services include: downranking or blurring of PPC where a user is believed to be a child; filtering identified PPC out of their search results; and provision of crisis prevention information in response to known PPC-related search requests regarding suicide.

Ofcom has also listed proposed measures for user-to-user services, in addition to the required use of HEAA to prevent children encountering PPC. These measures include: complaints and reporting systems that are easy to access and use; well-resourced content moderation systems or processes designed to swiftly take action against identified content harmful to children; and measures for recommender systems and algorithms that ensure content likely to be PPC is not recommended to children.

Whilst the Act will not restrict adult users’ access to legal content, the Act will enable adult users to have more choice about what legal content they do or do not engage with. When the legislation is fully in force, services over the designated threshold (also known as ‘Category 1 Services’) will be required to provide adult users with tools that will enable them to choose whether or not to engage with certain types of legal content. This includes content that encourages, promotes or provides instructions for suicide and self-harm. When applied, these tools will reduce the likelihood that they are exposed to such content or will alert them to the nature of it. Furthermore, services over the designated threshold will be required to remove legal content that is prohibited in their terms of service. This means that adult users can then make informed choices about which platforms they choose to use.

From December 2024, all in-scope services must start assessing the risk of users encountering illegal content on their platforms. As stated above, we then expect the illegal safety duties to be fully in effect from Spring 2025. From January 2025, services must also assess the access of children on their platforms, and we expect the duties to protect children from content that is harmful to them to be fully in effect by Summer 2025. If services do not comply with these duties, the Act provides Ofcom with powers to take robust enforcement action. This will include issuing substantial fines (up to 10% of qualifying worldwide revenue) or taking action to implement business disruption measures.

My officials work closely with officials in the Department of Health and Social Care, who coordinate the government’s approach to preventing suicides and improving people’s mental health. Addressing the factors set out in the Suicide Prevention Strategy for England will be an important part of delivering on the ambition to reduce suicide rates. The strategy identifies both tackling online harms and tackling emerging methods as priority areas for action, and this includes exploring further opportunities to address online harms, including harmful content shared on pro-suicide websites and forums. I appreciate the importance of looking at actions to reduce access to, and awareness of, concerning methods.

I would like to thank you for bringing your concerns to my attention. As the Secretary of State responsible for the Act, I am committed to working with Ofcom to ensure the Act is implemented as quickly and effectively as possible.
OFCOM
9 Jan 2025
AI summary: Ofcom is actively implementing the Online Safety Act 2023 by developing detailed codes of practice and guidance for online services to identify and remove illegal suicide content. It is working with platforms to ensure compliance with the new duties, which will come fully into effect from Spring 2025, and will take enforcement action for non-compliance.
Dear Mr Potter,

We write in response to your Regulation 28 Report to Prevent Future Deaths (‘the Report’), dated 14 November 2024, following the inquest into the death of Miranda Emilia Avanzi. Firstly, we would like to offer our deepest condolences to Ms Avanzi’s family and loved ones on behalf of Ofcom. A loss in such terrible circumstances must be incredibly difficult for her family. We thank the coroner’s office for bringing to our attention the potential role of search engines and online platforms in the tragic circumstances of Ms Avanzi’s death. This is important to us as we begin to implement and enforce the Online Safety Act 2023 (‘Act’). We wish to assure you and the family of the deceased that Ofcom is committed to taking action to ensure that search engines and other online platforms take their new duties of protecting people from illegal and harmful suicide content seriously.

1. Overview of our response to the Report

In this response, we set out our proposed actions in response to the ‘matters of concern’ raised in the Report. We have split these matters of concern out under three main headings in bold below, along with a brief summary of our response to each:
• The ready availability of information that provides clear instructions and advice for individuals wanting to end their own life through basic searches on Google and other search services. We explain how we are providing guidance to services on how to identify content that illegally encourages or assists suicide. We also explain the duties on search providers to remove or lower the ranking of illegal suicide content, and to include certain crisis prevention information in response to search queries about suicide;
• Forums and blogs where users are able to obtain information on suicide methods have no, or no useful requirement for any type of age verification. We explain the duties on user-to-user services, including taking down illegal suicide content when it is identified. We also explain the duties on services likely to be accessed by children to prevent children from encountering certain content that is particularly harmful to them, including content that promotes, encourages or provides instructions for suicide. Finally, we explain our recommendations about how services can use age assurance to protect children from harmful content;
• Crisis prevention support information shown to users at the top of search results, for instance calling 999. In relation to this matter, we recap the duties on search services to have systems and processes designed to remove or lower the ranking of illegal suicide content of which they are aware in search results, as well as our recommendation that search services should provide crisis prevention information in response to certain search queries concerning suicide. We also set out how regulated search services should assess the risk of users encountering illegal suicide content, and the harm that could arise from this content.

However, before setting out the above in more detail, it may be useful to explain the relevant duties that the Act imposes on online search and user-to-user services,1 how we plan to hold companies to account for compliance with the duties, the role of our Codes of Practice, and the timetable for different phases of our implementation of the Act.
2. Detailed background on Ofcom’s implementation of the Online Safety Act

The Online Safety Act 2023

The Act makes persons that operate a wide range of online services legally responsible for keeping people safer online. It covers companies that provide search or user-to-user services and have links to the UK, which is understood according to the following criteria:
• Has a significant number of UK users; or
• Has UK users as one of its target markets; or
• Is capable of being used by UK users, and there are reasonable grounds to believe that there is a material risk of significant harm to UK users.

Any service which meets one or more of the above criteria, and which is not exempt2, will be expected to comply with the relevant duties under the Act.

Provisions of the Act: legal duties on service providers

Among other things, the Act:
• Appoints Ofcom as the regulator for online safety and confers upon us a number of powers and duties (set out in detail below).
• Imposes a number of duties on those regulated services, in order to improve their systems and processes to ensure the safety of their users. These include:
o Duties on search services and user-to-user services to assess the risks their services pose to users in relation to illegal content (including illegal suicide content, which we address in more detail below), and content that is harmful to children (including content that encourages, promotes or provides instruction for suicide), and take steps to mitigate and manage those risks.
o Duties on search services to minimise the risk of individuals encountering illegal content (including illegal suicide content) and children from encountering content that is harmful to them (including content that encourages, promotes or provides instruction for suicide) in search results;
o Duties on user-to-user services to swiftly take down illegal content (including illegal suicide content) when it is identified, and to prevent children from encountering content that is harmful to them (including content which encourages, promotes or provides instructions for suicide); and
o Additional duties for the largest and highest-risk services allowing their users to increase control over the content they encounter on those services (including content that encourages, promotes or provides instruction for suicide).

1 ‘Search services’ and ‘User-to-user services’ are terms used and defined in the Act. Search services are services that are, or include, a search engine, which allows users to search more than one website or database. A user-to-user service (U2U service) is an internet service by means of which content that is generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service.
2 A number of exemptions also apply as set out in Schedule 1 of the Act. See: Overview of regulated services.
• Requires Ofcom to issue a number of regulatory publications to help regulated services understand how they can comply with their legal duties. These include Codes of Practice setting out recommended measures services can take to mitigate risks of harm in compliance with their duties, and resources to help companies assess, understand and manage risk.

Timeline for duties under the Act coming into force

Although the Act is now law, there are numerous procedural steps needed for the new regime to be fully implemented, and these steps need to be completed before services’ legal duties under the regime – and Ofcom’s ability to enforce those duties – come into force. These steps include:
• The completion of public consultations (the first, on illegal harms, closed on 23 February 2024, and the second, on protecting children from harms online, closed on 17 July 2024);
• Ofcom publishing final statements (the Illegal Harms Statement was published on 16 December 2024 and the Protection of Children Statement will be published in April 2025);
• Services completing Risk Assessments designed to help them understand and manage the risks of harm to their users; and
• Parliament approving Ofcom’s final Codes of Practice.

The duties in the Act are not yet enforceable; however, we have already been encouraging service providers in scope of the Act to take meaningful steps to improve safety on their platforms. To this end, we are committed to driving industry improvements by engaging with the riskiest services via ongoing ‘regulatory supervision’. We also have a dedicated team for identifying, prioritising and escalating emerging issues, particularly on services where we do not have an existing supervisory relationship, which we call the Triage team. The purpose of the Triage team is to ensure Ofcom responds effectively, promptly, and proportionately to new or growing harms and risks, focusing our reactive work on the most harmful issues.

In October 2024, we wrote to Parliament with an update on our progress on implementing the Act. This included explaining what we have done so far, what action service providers must take in 2025, and our updated roadmap for the next stages of our work. We have summarised below our intended plans for implementation, and in diagram form in Figure 1 in the annex below. This timeline shows our key milestones but is not a comprehensive guide to everything we will produce over the first years of the regime.

As part of our preparatory work for implementation, we have been actively engaging with a range of expert stakeholders including government, law enforcement, and charities such as the Samaritans to develop our understanding, expertise and evidence base in relation to suicide, and to ensure that we are aware of developing areas of risk. We have also been concentrating on growing our internal expertise in relation to this complex and important harms area, including by commissioning research.3 We will continue our programme of engagement with relevant experts as the regime evolves.

Phase One: Illegal Harms

As set out above, we published our statement on protecting people from illegal harms online (‘Illegal Harms Statement’), which includes our final Illegal Harms Codes of Practice and guidance, on 16 December 2024. The Codes have been submitted to the Secretary of State, who has laid them in Parliament for 40 days. Concurrently, regulated service providers have until 16 March 2025 to undertake their illegal content risk assessments, which we explain in more detail below. Following approval by Parliament, the Codes will come into force 21 days after they have been issued, which we expect to be from 17 March 2025. At this time, the illegal harms safety duties become enforceable, and we can begin investigations and, following the conclusion of those, impose sanctions if we find that services are not compliant with these duties.

Illegal Harms - Assessing risks

The Act requires Ofcom to produce a Register of Risks for Illegal Harms and guidance to assist services in conducting their own risk assessment. Our final risk assessment guidance sets out a four-step process which we propose as the best way to ensure that a service’s assessments meet their obligations. Further, our Risk Profiles set out an explanation of factors in service design and operation that increase the risk of harm. Service providers will be required to take account of our Risk Profiles when conducting their risk assessments. The information contained in the risk profiles is sourced from Ofcom’s own Register of Risks. For illegal suicide content, we set out risk factors relating to:4
• service type e.g. discussion forum and chat room services can act as spaces where suicide is assisted or encouraged;
• user base e.g. users who are in vulnerable circumstances such as those suffering with their mental health and who might be experiencing thoughts of suicide or self-harm are more likely than other users to be at risk from the effects of this type of content;
• functionalities of the service e.g. commenting on content is a risk factor, as there is evidence of people using comments to encourage the suicide of the person who distributed the content;
• recommender systems e.g. research suggests that where there are vulnerable users who are engaging with harmful content, such as self-harm or suicide content, recommender systems are more likely to create a ‘filter bubble’ or ‘rabbit hole’; and
• business models and commercial profiles e.g. there is some evidence to suggest that advertising-based revenue models may be a risk factor for suicide and self-harm content.

3 See, for example, our research on suicide content and search services: ‘One Click Away: a study on the prevalence of non-suicidal self injury, suicide, and eating disorder content accessible by search engines’. See also our research into children’s experience of suicide, self-harm and eating disorders content: ‘Experiences of children encountering online content relating to eating disorders, self-harm and suicide’.
4 See Register of Risks, Chapter 15 (Encouraging or assisting suicide (or attempted suicide)).

Regulated search services such as Google, as well as regulated user-to-user services, will need to complete an Illegal Harms Risk Assessment, including assessing the risk of illegal content such as illegal suicide content, by 16 March 2025. If regulated service providers conclude from their risk assessment that they are at risk of illegal harm(s), they will need to take steps to mitigate the risk of harm they have identified on their service.

Illegal Harms – Mitigating risk of illegal harms

The Act imposes duties on all regulated user-to-user services relating to protecting their users from illegal harms. That will require those services to understand and take steps to manage and mitigate the risks of users encountering illegal suicide content (i.e. which amounts to the offence of encouraging or assisting suicide), or of their services being used for the commission or facilitation of this offence. User-to-user services will also have to swiftly take down illegal suicide content when it is identified. The duties imposed on regulated search services include a duty to use systems and processes to minimise the risk of individuals encountering (among other things) illegal suicide content in search results.

We also published Ofcom’s Illegal Content Judgements Guidance (ICJG) on how services can identify illegal suicide content, as discussed further below. The ICJG provides guidance to user-to-user and search service providers on how they may identify illegal content (i.e. content which may reasonably be inferred to amount to a relevant offence) including under Section 2 of the Suicide Act 1961 (‘the suicide offence’). The offence of encouraging or assisting suicide is a priority offence under the Act.

The Act requires Ofcom to produce Codes of Practice setting out the measures that in-scope services may take to comply with their duties under the Act.5 According to our recently published final Illegal Harms Codes of Practice,6 services will need to take steps to mitigate the harm from suicide material in the following ways:
• Search and user-to-user services will need to assess the risk of illegal suicide content being posted on their service;
• If they are high risk for it, they will need to ensure their content moderation teams are appropriately resourced and trained to deal with that content;
• User-to-user services will need to continuously test their recommender systems as they make changes to them to assess whether they are inadvertently promoting illegal suicide content; and
• User-to-user services will need to have systems for removing illegal suicide content quickly as soon as they become aware of it, while search services will need to remove or lower the ranking of illegal suicide content in their search results.

While service providers are not required to implement all measures in our Codes of Practice, in the event that they choose not to take the steps recommended, they will need to be able to explain how their chosen approach allows them to be compliant with their legal duties. We explain below some of the relevant measures we have recommended in our Illegal Harms Codes of Practice.

5 Section 41 of the Act.
6 See here for the illegal content Codes of Practice for search services. See here for the illegal content Codes of Practice for user-to-user services.

Phase Two: Protecting children from harms online

Services that are likely to be accessed by children will have additional duties to protect children’s online safety. As set out above, we published our consultation on Protecting children from harms online (‘Protection of Children Consultation’) on 8 May 2024, and our statement will be published in April 2025.

Protecting children - Assessing access

Although the Report concerned the death of an adult, we take this opportunity to set out information about services’ duties to protect children from content harmful to them. This is because such content includes content that promotes, encourages or provides instructions for suicide. All user-to-user and search services are required to carry out children’s access assessments. A children’s access assessment is a process for establishing whether a service is ‘likely to be accessed by children’ within the meaning of the Act.7 We intend to publish our final Children’s Access Assessments Guidance in January 2025. Regulated services will then have three months to complete children’s access assessments. If they conclude that their service is likely to be accessed by children, they will need to carry out a children’s risk assessment and comply with the children’s safety duties in the Act.

Protecting children - Assessing risks

The Act requires Ofcom to produce a Children’s Register of Risks, and guidance to assist services in conducting their own children’s risk assessments. Our draft Children’s Risk Assessment Guidance sets out a four-step risk assessment process which we propose as the best way to ensure that services meet their risk assessment obligations, and, based on our Children’s Register of Risks, our Children’s Risk Profiles set out the risk factors that we consider are associated with an increased risk of harm on services. The Act also requires Ofcom to provide guidance for providers of user-to-user and search services which gives examples of content, or kinds of content, that Ofcom considers to be, or considers not to be, harmful to children (‘Guidance on Content Harmful to Children’). This is intended to support service providers that may need to make judgements about whether content on their service amounts to content that is harmful to children as defined in the Act. Following the outcome of the children’s risk assessment, service providers will need to take steps to mitigate the risks of harm they have identified on their service.

Protecting children - Mitigating the risk of harm

The Act imposes duties relating to content which is harmful to children. Content which “encourages, promotes or provides instructions for suicide” (‘suicide content’) has been designated as ‘Primary Priority Content’ under the Act. Where regulated services are likely to be accessed by children within the meaning of the Act, they will also have to understand the risks of, and take steps to prevent (in the case of user-to-user services) or minimise the risk of (in the case of search services), child users encountering suicide content. The Act requires Ofcom to produce Codes of Practice setting out the measures that in-scope services may take to comply with their duties under the Act. The draft Children’s Safety Codes set out the measures that services can take to comply with the children’s safety duties and recommend proportionate systems and processes across a number of areas, including age assurance (for user-to-user services only); recommender systems; content moderation; governance and accountability; user tools; and user reporting and complaints.8

Ofcom’s Protection of Children Statement

We will publish our Protection of Children Statement in April, which will set out our final position in relation to children’s risk assessments and safety measures. Providers of services that are likely to be accessed by children will then have three months to carry out children’s risk assessments. At this point we will also submit the Children’s Safety Codes to the Secretary of State, which, subject to approval, are to be laid in Parliament for 40 days.

Phase Three: transparency, user empowerment, and other duties on categorised services

Phase three of the implementation of the Act focuses on additional duties for categorised services, including transparency, and other duties such as user empowerment which will apply only to Category 1 services. Duties for Category 1 services include a duty to include, to the extent that it is proportionate to do so, features which adult users may use or apply if they wish to increase their control over certain kinds of content, including content which encourages, promotes or provides instructions for suicide. We issued a Call for Evidence regarding our approach to phase three on 25 March 2024. Alongside this, we also published our advice to Government on the thresholds which would determine whether or not a service falls into Category 1, 2A or 2B. The Secretary of State has laid in Parliament draft Regulations setting out the threshold conditions for categorisation,9 based on Ofcom’s advice.

7 Section 37 of the Act explains the meaning of “likely to be accessed by children”.
3. Our response to the ‘Matters of Concern’

In the Report, you set out a number of serious concerns. We have separated these out under three main headings and address each in turn.

The ready availability of information that provides clear instructions and advice for individuals wanting to end their own life through basic searches on Google and other search services.

In our Illegal Content Judgements Guidance (ICJG), we note that it is an offence to intentionally encourage or assist the suicide (or attempted suicide) of another person. Online material which amounts to this offence is considered illegal suicide content under the Act. The ICJG chapter on the suicide offence states that “content which provides specific, practical or instructional information on suicide methods is likely to be capable of constituting assistance for the purposes of this offence in an online context”. For example, this could include “details on the most effective way of taking one’s own life, or tips about how to do so in a way which avoids interruption from others.”10 Search engines must comply with their duties to assess the risks of such illegal suicide content appearing in search results. As part of Ofcom’s Codes of Practice, we recommend that they do this by having systems and processes designed to remove or lower the ranking of illegal suicide content in search results. The type of material that you express concern about in the Report may, depending on the circumstances, amount to the suicide offence. If so, it should be removed from, or have its ranking lowered in, search results when identified. Large general search services should also provide crisis prevention information in response to search queries regarding suicide generally or seeking specific, practical or instructive information regarding suicide methods.

8 For more information, see here for the Protection of Children draft Code of Practice for user-to-user services. See here for the Protection of Children draft Code of Practice for search services.
9 The Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025.
10 Ofcom, 2024. Illegal Content Judgements Guidance (ICJG), p. 175.

Forums and blogs where users are able to obtain information on suicide methods have no, or no useful requirement for any type of age verification.

Forums and blogs may or may not be considered within scope of the duties under the Act. This would depend on whether a user can create or share content that can be viewed by other users of the service. If a forum or blog does not support such usage, it would be out of scope of the user-to-user duties under the Act. However, it would still be relevant for the duties on search engines (as explained above), which are required to remove or lower the ranking of webpages hosting illegal suicide content, whether or not those webpages constitute user-to-user services. If, however, such forums and blogs are deemed to be user-to-user services and likely to be accessed by children, they will have specific duties to prevent children from encountering certain kinds of harmful content, even if it is not illegal. This includes content that promotes, encourages or provides instructions for suicide. Our draft Children’s Safety Codes set out the measures we propose to recommend that service providers take to protect children from harmful content, such as:
• User-to-user services whose principal purpose is the hosting or dissemination of Primary Priority Content should implement highly effective age-assurance to prevent children from accessing the entire service. We would expect services in this category to include discussion forums or chat rooms where suicide is the primary subject of discussion.
• Services whose principal purpose is not the hosting or dissemination of Primary Priority Content, but which allow Primary Priority Content, will need to implement highly effective age-assurance to prevent children from encountering it on the service. This could apply to a range of social media platforms and messaging apps, as well as discussion forums or chat rooms, unless they choose to prohibit this content.
• Services which use algorithmic recommender systems and which have a medium or high risk of Primary Priority Content on their service should also implement highly effective age-assurance to ensure that Primary Priority Content is filtered out of those recommender systems.

These measures have been proposed to help reduce children’s online exposure to Primary Priority Content. Our draft Children’s Safety Codes also recommended certain measures to make content moderation systems work more effectively. We proposed measures that target the effectiveness of content moderation systems on both user-to-user and search services. These measures are focussed on making sure services have in place effective systems and processes to act on content that is harmful to children (including content that promotes, encourages or provides instructions for suicide), clear policies on what is allowed, adequate moderation resources and effective systems to prioritise how content is moderated.

As set out above, we expect that our draft Children’s Safety Codes will be finalised in April and become enforceable in July 2025. As such, we expect services to take the necessary steps to comply with their duties to prevent children accessing content that encourages, promotes or provides instructions for suicide, including implementing highly effective age checks. Where they fail to comply, we will take robust steps to enforce against them.

We would also emphasise that the duty in relation to illegal content for regulated user-to-user services is to take down such content swiftly, once identified. Therefore, if a forum or blog fulfilled the requirements making it a regulated user-to-user service, and they become aware of illegal suicide content, then we would expect them to comply with their duties, beginning with an assessment of the risk of illegal suicide content on their service. If they are high risk for it, they should ensure that their content moderation teams are appropriately resourced and trained. Once those teams identify such content, or the service becomes aware of it through other means (e.g. a user report), they should have systems to remove it as quickly as possible. At the same time, if they use recommender systems on their service, they should test any changes they make to ensure that they are not inadvertently promoting illegal suicide content.

Finally, you mention that when making searches on Google or other search services for content of the nature that you described, the top of the search results page shows crisis prevention support information to users, for instance calling 999. We note your concern that the offer of such crisis prevention information is an acknowledgement by the search service of the potential vulnerability of a user to the risk of suicide, given that they are actively searching for suicide methods.

As mentioned above, search engines should have systems and processes designed to remove or lower the ranking of illegal suicide content in search results of which they are aware. When deciding whether removing or lowering the ranking of search content is the more appropriate action according to our Codes, search providers should consider:

i) The prevalence of illegal content hosted at the URL or in the database at which the search content concerned is present;
ii) The interests of users in receiving any lawful material that would be affected; and
iii) The severity of potential harm to users if they encounter the content, including whether the content is priority illegal content and the potential harm to children.

In our Illegal Harms Codes, we have also included a measure (as noted above) that large general search services should use systems and processes to detect and provide crisis prevention information in response to search requests made by users that contain: i) general queries regarding suicide; and ii) queries seeking specific, practical or instructive information regarding suicide methods. The crisis prevention information should provide a helpline that is both associated with a reputable health or suicide prevention organisation and is available to all UK users, irrespective of age or geographical location within the UK, for 24 hours per day. It should also provide link(s) to information and support. Our policy intention in recommending this measure is to disrupt user search journeys to minimise the risk of individuals encountering illegal suicide content and provide timely assistance to individuals in distress where the risk of harm might otherwise be severe.

Further, to comply with their illegal content risk assessment duties, all providers of regulated search services will need to consider the features and functionalities of their service and how they affect the risk of users encountering illegal content, and the harm which could arise from this content. We expect services to have regard to our Illegal Harms Register of Risks (in the present case, the chapters ‘Encouraging or assisting suicide’ and ‘Search’ may be particularly relevant), Risk Profiles and Risk Assessment Guidance.
For search services, this would require that they factor in user base demographics, including where search queries and results may pose increased risk of harm to vulnerable users such as those at heightened risk of suicide. They will need to take these findings into account in deciding what measures to implement to mitigate and manage these risks, including how to treat illegal suicide content they become aware of.

4. Conclusion

We thank the coroner’s office again for bringing to our attention the circumstances of Ms Avanzi’s death and the role that access to online services may have had. Ms Avanzi’s death highlights the impact suicide content can have on people, and Ofcom is committed to holding service providers to account for addressing the risk of harm from this type of content. Evidence included in reports from coroners and other experts will play an important role in our policy proposals and response as we implement the regime, and we will of course take the evidence in your report into account as we continue our policy development. We are committed to holding industry to account in accordance with their new duties under the Act discussed in this response. We will work directly with services to promote compliance, including, where appropriate, through targeted supervision. This includes already working with the largest and riskiest search engines and other online platforms to ensure they understand what they need to do to comply and drive improvements, including to better protect people from suicide material that is illegal or harmful to children. Where we identify non-compliance, we will not hesitate to take appropriate enforcement action to protect adults and children from harm. We hope that this response provides helpful information about the significant steps Ofcom is taking as we continue to work through the implementation of the Act. If further information or clarification is required, we would be happy to provide this.
Report Sections
Investigation and Inquest
On 18 July 2024, an investigation was commenced into the death of Miranda Emilia AVANZI, aged 58 years at the time of her death.

The investigation concluded at the end of an inquest on 12 November 2024. The conclusion of the inquest was ‘suicide’.

The medical cause of death was: 1a suspension by ligature
Circumstances of the Death
On 9 July 2024, Miranda Avanzi was found unresponsive at her home address, partially suspended by a ligature. Her death was verified by a paramedic shortly thereafter.

Ms Avanzi left notes of intent, clearly indicating a settled intention to end her own life, and instructions that she should not be resuscitated.
Related Inquiry Recommendations

Public inquiry recommendations addressing similar themes

Pre-screening by Internet Providers (IICSA, Harmful Algorithmic Content Promotion)
Age Verification Online (IICSA, Harmful Algorithmic Content Promotion)
Publish interim online harms code of practice (IICSA, Harmful Algorithmic Content Promotion)
Pre-screen material before upload (IICSA, Harmful Algorithmic Content Promotion)

Data sourced from Courts and Tribunals Judiciary under the Open Government Licence.