The 90-year-old case of a dead snail in a ginger beer provides a model for making digital platforms liable for harm to their users

In 1928, a woman named Mary Donoghue fell ill after finding a dead snail in a bottle of ginger beer that a friend had bought for her at a cafe in Paisley, Scotland. She sued the manufacturer and won.

The ruling enshrined the concept of “duty of care”—a legal obligation to protect a customer, tenant or worker from harm. “The rule that you are to love your neighbor becomes in law ‘You must not injure your neighbor,’” proclaimed Lord Atkin of Aberdovey, who delivered the leading judgment in 1932 when Britain’s House of Lords reversed a lower court to rule for Donoghue. Now, as Western regulators struggle with how to restrict the most harmful online content while at the same time protecting free speech, Britain has come to see the nearly century-old principle as a possible solution.

“Duty of care” helped to lay the foundation for modern health and safety laws in the U.K. After World War II, the concept also took hold across the Atlantic, where U.S. courts made it the basis of modern American negligence law. The principle doesn’t prescribe specific rules or remedies. Instead, it makes an employer, for example, generally liable for the safety of its employees. Britain’s Health and Safety at Work Act doesn’t specify how many fire extinguishers should be installed in an office building, but it holds the landlord responsible for making sure there are enough.

The idea of policing the internet like a public space gets around the challenge of trying to restrict specific content itself.

In the same way, duty of care holds the owner of a public space responsible for the well-being of its visitors: A supermarket chain is ultimately responsible if a customer slips on the proverbial banana peel.

The concept is now being reimagined by the British government as the cornerstone of sweeping legislation aimed at forcing big tech companies to police their content better. Under the government’s proposal, a new regulator would have the power to require companies to protect users from a number of identified online harms—such as pornography, extremist content and cyberbullying. The regulator could issue big fines, block access to offending websites or refer tech executives for prosecution. On Thursday, the traditional Queen’s Speech outlining the government’s priorities for the coming year included advancing the legislation.

Lorna Woods and Will Perrin, two independent researchers in the U.K. who had worked together on a previous privacy-rights project, came up with the new spin on duty of care last year in response to the government’s request for ideas. The 2017 suicide of British teenager Molly Russell added urgency to government efforts to tackle harmful online content. Russell’s father said that she took her own life after viewing images of self-inflicted harm on Facebook Inc.’s Instagram. In the wake of national outrage, Instagram said earlier this year that it would use its algorithms to weed out such images.

Mr. Perrin, a former civil servant, helped to set up the powerful media regulator Ofcom, the U.K.’s version of the Federal Communications Commission, in 2003. Ms. Woods, a media lawyer and professor of internet law at the University of Essex, co-wrote a book on European broadcasting policy.

Over tea and sandwiches last year, the pair talked through the different terms that had been used to describe social media in a legal context, looking for the right analogy. They tried “platform,” “pipe” and “intermediary.” Nothing seemed to fit. Then “we thought of a ‘public space,’” says Ms. Woods. “People do different things online. It was just like ‘how do we regulate spaces?’”

A legal infrastructure already existed: Duty of care, a framework “that’s applied in the context of a distinct environment,” Ms. Woods says.

It also sidesteps another Big Tech defense: that platforms like Facebook or Twitter shouldn’t be held accountable for content that third parties post. Under duty of care, “It doesn’t matter who dropped the banana peel,” says Susan Benesch, an associate professor at the Berkman Klein Center for Internet and Society at Harvard University—it happened in the public space provided by the platform.

There are critics of the approach. Alex Abdo, litigation director at the Knight First Amendment Institute at Columbia Law School, says that efforts to define harm online are too hazy. “In the physical world, when you’re talking about a duty of care, there often are obvious concrete obligations that custodians of physical space have,” such as making sure people don’t trip, he says. “If the goal is to figure out how to restrain lawful speech to prevent real harms,” he says, the duty of care concept isn’t specific enough. “You need to propose exactly what you expect.”

Officials of some tech companies, including Facebook and Twitter, have been talking with the British government to try to reach a balance they can accept. “These are complex issues to get right,” said Facebook’s U.K. head of public policy, Rebecca Stimson. People familiar with Google parent Alphabet and Snap Inc. said that the companies saw the initial U.K. proposals as too broad and covering too many harms; the companies declined to comment. An official at the Incorporated Society of British Advertisers, a trade group that represents digital advertisers, said that using the duty of care framework was a good idea but that regulators should also draw on human rights laws.

Trying to spur progress on the measure amid the government’s intense focus on Brexit negotiations, Mr. Perrin and Ms. Woods published their own draft legislation on Wednesday. To simplify the plan, they proposed that Ofcom do the enforcing rather than a new regulator. Their outline calls on Ofcom to establish how companies should deal with harms relating to terrorism, hate speech, fraud, “threats to democracy” and the safety and well-being of people under the age of 18.

The regulator would dole out penalties based on the level of harm experienced by people going online. It could also allow civil action, potentially making the U.K. a hot spot for “online-harm” lawsuits in the same way that the country’s strict libel laws have made London a favorite venue for libel suits, says Merrill April of the British employment law firm CM Murray. “Because you didn’t have x and y in place, my child, or I, would suffer harm—that’s extremely familiar territory to personal injury lawyers,” Ms. April says.

Though the legislation has yet to reach Parliament, the U.K. proposal is already being used as a model elsewhere. This past summer, the French government published a report on regulating Big Tech, written by experts who had spent weeks observing Facebook’s operations. They proposed setting up rules to audit social media firms based on “the Anglo-American ‘duty of care.’”

Write to Parmy Olson at parmy.olson@wsj.com