COURT FILE NO.: CV-19-2254
DATE: 20221219
ONTARIO
SUPERIOR COURT OF JUSTICE
B E T W E E N:
COLLETT THORPE & YUNALAND INC.
Preetmohinder Singh Wadhwa, for the Plaintiffs
Plaintiffs
- and -
AUDREY BOAKYE, OHIGIMENTON FRANCIS, GOOGLE INC., GOOGLE LLC & JOHN DOE
James Bunting, for the Defendant Google LLC
Defendants
HEARD: March 30, 2022,
by video-conference, at Brampton, Ontario
Price J.
Reasons For Order
INDEX
NATURE OF MOTION
BACKGROUND FACTS
ISSUES
POSITIONS OF PARTIES
Google’s position
Yunaland’s position
ANALYSIS AND EVIDENCE
a) General principles applying to motions for summary judgment
b) Applying these principles to the present case
i. Are there genuine issues of fact, or of mixed fact and law, that need to be decided by a trial? In particular, do the following raise such issues?
(a) Were the posts defamatory and do they refer to Yunaland?
(b) Did Google publish the defamatory content?
Legislative framework
Canadian jurisprudence
U.S. jurisprudence
(c) Was the defamation actionable? That is, was notice given and, if so, did Google comply with the Act?
(d) Was Google a passive intermediary?
ii. Can the issues be resolved by employing the powers in Rule 20.04(2.1) and (2.2)?
iii. Would granting summary judgment be a proportionate, more expeditious, and less expensive manner of achieving a just outcome than having the action proceed to trial?
c) Should Yunaland be granted leave to make a motion to amend its Statement of Claim and, if so, on what terms?
CONCLUSION AND ORDER
NATURE OF MOTION
[1] Google LLC (“Google”) moves for summary judgment dismissing the action by Collett Thorpe and Yunaland Inc. against it and, if necessary, against Google Inc., which was Google’s name before a corporate reorganization.
[2] Collett Thorpe and Yunaland Inc. oppose Google’s motion and request leave to make a motion to amend their Statement of Claim, which Google opposes.
BACKGROUND FACTS
[3] Yunaland Inc. is a daycare business in Brampton, Ontario, where it provides licensed daycare services to children. Collett Thorpe is its only director. I will refer to Ms. Thorpe and Yunaland Inc. collectively as “Yunaland” in these reasons.
[4] The defendant Audrey Boakye is a former customer of Yunaland. The defendant Ohigimenton Francis is Ms. Boakye’s husband, who has never been a customer of Yunaland. The defendant John Doe is a friend of Ms. Boakye and/or her husband, or a fictitious person or persons invented by them, and has also never been a customer of Yunaland.
[5] Google is a corporation with its head office in the State of California, in the United States of America. It has many subsidiaries, including Google Canada Corporation, whose offices are situated in Toronto (“the Toronto address”). An internet search for “Google Office in Canada” shows the Toronto address as Google’s office in Canada.
[6] The defendants, other than Google, were the authors of false and negative reviews posted on Google’s online applications, “Google Maps,” “Google Search,” and/or “Local Reviews.” All the defendants other than Google have been noted in default in Yunaland’s action.
[7] Users of Google Search and Google Maps can interact with information about a business by leaving a review, uploading a photo, or reporting inaccurate or outdated information about a business. The tool that allows them to post a review is called “Local Reviews.”
[8] Users must have a Google account to post a Local Review. The username associated with that account appears on each Local Review. Google provides tools to business owners like Collett Thorpe to interact with information posted on Google Maps and Google Search about their business, including by responding directly to Local Reviews. Where a business owner does so, their response appears immediately below the review in question.
[9] Audrey Boakye posted a review containing false information about the plaintiff, Collett Thorpe, on Local Reviews, stating that “Collett Thorpe is a fraud and there is a police investigation against [her].” Later, after Yunaland demanded that Ms. Boakye remove that post and she did so, Ms. Boakye and her husband, or other defendants, fictitious or real, who had never been customers of Yunaland, posted other reviews which were also demonstrably false. One of these stated, in part:
This is the worst daycare ever. Don’t waste your time bringing your kids here. The owner is very unorganized. The playground is not tidy, unpaved and dirty during the Summertime, etc.
[10] Yunaland seeks $700,000.00 in damages from Google for allegedly defamatory statements that Ms. Boakye, her husband, and their “friends” (real or fictitious), posted in online reviews of Yunaland’s business (“the reviews”).
[11] Anthony Nichols, a Google Representative who gave an affidavit in support of Google’s motion for summary judgment, stated at page 4, line 5 of his cross-examination that anyone can use almost any name to create an account as a Google user on the Local Reviews app on Google’s platform without disclosing his or her real identity. Yunaland submits that Google failed to check whether its platform had been misused by the Google users, who were thereby able to conceal their identities and defame Yunaland.
[12] Google’s staff, in the absence of a court order, refused to identify the people who had posted negative reviews of Yunaland.
[13] On April 16, 2019, Yunaland served written notice to Google Inc. under s. 5(1) of the Libel and Slander Act, R.S.O. 1990, c. L.12, at the Toronto offices of Google Canada Corporation, a subsidiary of Google (“the notice”). The notice did not identify the specific words in the other defendants’ reviews that Yunaland alleged were defamatory. Representatives of Google Canada Corporation refused to accept the notice.
[14] Yunaland eventually sent its notice by registered mail to the Google offices in California, where it was received by Google on April 25, 2019.
[15] Yunaland’s Statement of Claim was issued in June 2019 and was served on Google at its offices in California.
[16] Google alleged in its Statement of Defence that Yunaland failed to deliver a Notice to it under s. 5(1) of the Libel and Slander Act. Google submits that it first learned of Yunaland’s allegation that the reviews in question were defamatory when it received the Statement of Claim. Yunaland denies this.
[17] When Google received Yunaland’s claim, its lawyers informed Yunaland’s lawyers that if they obtained a court order declaring the reviews defamatory, Google would consider removing the reviews voluntarily.
[18] The defendants Audrey Boakye and her husband Ohigimenton Francis failed to deliver a Statement of Defence and Yunaland requisitioned the Registrar to note them in default in July 2019. When Yunaland later requisitioned a default judgment against them based on their default, the Registrar directed them, pursuant to Rule 19.04(3), to make a motion for that relief, as Rule 19.04(1) did not authorize the Registrar to sign default judgment for the particular relief that Yunaland was seeking.
[19] Yunaland filed its motion for default judgment in the amount of $709,054.31 against Audrey Boakye and Ohigimenton Francis. It also sought a declaration that the reviews that those defendants had posted were defamatory and a court order requiring them to remove the reviews. McSweeney J. heard that motion on September 19, 2019.
[20] Google did not oppose Yunaland’s motion because the motion did not seek any relief as against Google. Yunaland did not seek relief against Google in the motion as its discussions with Google’s lawyers were ongoing in September 2019, when the motion was made.
[21] In her endorsement dated September 19, 2019, McSweeney J. stated that the material before her was insufficient to grant Yunaland the order they requested, declaring the defendants’ reviews to be defamatory and requiring the defendants to remove them. McSweeney J. granted Yunaland leave to proceed to an uncontested trial and also granted them leave, pending the trial, to bring a further motion for the declaration and injunctive relief they sought.
[22] On January 21, 2020, Yunaland made a further motion before Van Melle J., with additional supporting evidence, for the declaration and injunctive relief they had previously sought in the motion before McSweeney J. Van Melle J. made the Order requested, declaring the reviews to be defamatory and directing the authors and publishers to remove them. Regional Senior Justice (“RSJ”) Ricchetti made a further Order to the same effect on February 25, 2020, adding a URL address of one of the defendants, which had been inadvertently omitted from Van Melle J.’s Order.
[23] Google now moves for summary judgment dismissing Yunaland’s claim against it. At the hearing of the motion, Yunaland requested leave to make a motion to amend its Statement of Claim, which Google opposed. The Court granted the parties leave to submit written arguments on whether the Court should entertain such a motion. The parties submitted their arguments on that issue on May 13 and May 27, 2022.
[24] At the hearing of its motion for summary judgment, Google argued that it was unable to determine whether the impugned reviews were defamatory until a Court declared them to be so. Google relied, in this regard, on the fact that McSweeney J. dismissed Yunaland’s request for a declaration to that effect on September 19, 2019, when she heard Yunaland’s initial motion for default judgment.
[25] Google, in its material supporting its motion for summary judgment, did not file the motion record that was before McSweeney J. on September 19, 2019. Without it, this Court was unable to ascertain why McSweeney J. had found that the material before her was insufficient to grant Yunaland the declaration they sought. This Court therefore invited counsel to provide it with the motion record that was before McSweeney J., which they did. It revealed that only a portion of the Statement of Claim had been included in Yunaland’s material in that motion. As that fact was not in evidence at the hearing of Google’s motion for summary judgment, this Court gave counsel leave to make written submissions on its significance, if any. Counsel submitted their arguments on that issue from September 8 to 29, 2022.
ISSUES
[26] The present motion raises the following issues:
a. Should Google be granted summary judgment dismissing Yunaland’s claim against it? That is,
i. Are there genuine issues of fact, or of mixed fact and law, that need to be decided by a trial?
ii. If so, can the issues be decided by using the special powers in Rule 20.04(2.1) and (2.2)?
iii. Would granting summary judgment be a proportionate, more expeditious, and less expensive way of achieving a just outcome than having the action proceed to trial?
b. Should Yunaland be granted leave to make a motion to amend its Statement of Claim and, if so, on what terms?
PARTIES’ POSITIONS
Google’s position
[27] Google submits that it ought not to be held liable for the defamatory content of the negative reviews of Yunaland that were posted on its electronic platforms because it neither authored nor published those reviews. Additionally, it asserts that even if it is found to be a publisher, it is not liable for the defamatory content by reason of the legal principle of “innocent dissemination.” That doctrine protects a defendant who disseminates defamatory content without knowledge of its defamatory nature.
[28] Google asserts that under Canadian law, passive intermediaries are not publishers of content authored by others and bear no liability if such content is defamatory. It asserts that “the Plaintiffs do not appear to contest this well-established law or allege that Google was a publisher of the Disputed Reviews.”
[29] Google submits that there is little dispute as to when it first received notice of the allegedly defamatory posts on its platforms. It submits that, in any event, the Court can readily resolve that issue based on the evidence before it on Google’s motion for summary judgment.
[30] Google asks this Court to grant summary judgment dismissing Yunaland’s claim against it. It submits that summary judgment would be a proportionate, more expeditious, and less expensive means than a trial to achieve a just result, as it would avoid the need for discovery and for a trial to determine damages. It submits that there is no dispute as to the key facts, which are that Google was not the author of the reviews, did not monitor or assess their content before the reviews were posted, and removed the reviews once the Court made an order declaring their content to be defamatory.
[31] Google submits that it is not disputed that once it was made aware of the disputed reviews and of Yunaland’s allegations, it informed Yunaland that if it obtained a court order against the authors of the reviews declaring their content to be defamatory, Google would consider removing them. It is also not disputed that when the Court issued that declaration, Google did remove them.
[32] Google characterizes Yunaland’s position as one that alleges that Google should be liable because it played a passive role, failing to:
• install software that could check for words like fraud, police, scam, etc.;
• stop the reviewers from posting reviews containing those words; and
• remove the defamatory reviews until the Court made an order declaring their content to be defamatory.
Yunaland’s position
[33] Yunaland submits that Google published the reviews, which were defamatory on their face.
[34] Yunaland disputes that Google was unaware of Yunaland’s position that the reviews were false and defamatory until it received the Statement of Claim. Yunaland says that it made Google aware of those facts before serving its Statement of Claim, and that Google wrongly refused to remove the reviews until Yunaland obtained a court order declaring their content to be defamatory and ordering their authors to remove them.
[35] Yunaland says that it had to incur thousands of dollars in legal fees and suffer a loss of business during the time it took to serve its Statement of Claim on Google at its offices in the United States and to obtain the court order that Google required before it would remove the defamatory reviews from its Local Reviews website.
ANALYSIS AND EVIDENCE
a) General principles applying to motions for summary judgment
[36] Rule 20.04 of the Rules of Civil Procedure, R.R.O. 1990, Reg. 194, made under the Courts of Justice Act, R.S.O. 1990, c. C.43, provides that where there is no genuine issue for trial with respect to a claim or defence, the Court shall grant summary judgment. Rule 20.04(2) states, in part:
(2) The court shall grant summary judgment if,
(a) the court is satisfied that there is no genuine issue requiring a trial with respect to a claim or defence; ….
[37] The Court grants summary judgment in the following circumstances:
• where the parties agree;
• where the claim is without merit; or
• where the motion judge is able to dispose of the matter and where the trial process is not required in the “interest of justice.”
[Emphasis added]
See: Combined Air Mechanical Services Inc. v. Flesch, 2011 ONCA 764, 108 O.R. (3d) 1, at paras. 41-44.
[38] For the reasons that follow, I find that none of those circumstances exists in the present case.
[39] The Supreme Court of Canada, in Hryniak v. Mauldin, 2014 SCC 7, [2014] 1 S.C.R. 87, and Bruno Appliance and Furniture, Inc. v. Hryniak, 2014 SCC 8, [2014] 1 S.C.R. 126, reinterpreted Rule 20 of the Rules of Civil Procedure, taking into account the need to preserve the public’s access to justice. The Supreme Court held that the summary judgment rules are to be interpreted broadly, favouring proportionality and access to affordable, timely, and just adjudication of claims.
[40] In Sweda Farms v. Egg Farmers of Ontario, 2014 ONSC 1200, at para. 32, Corbett J. described the approach courts should now take to summary judgment motions, based on the Supreme Court’s decision in Hryniak:
Summary judgment motions come in all shapes and sizes, and this is recognized in the Supreme Court of Canada’s emphasis on “proportionality” as a controlling principle for summary judgment motions. This principle does not mean that large, complicated cases must go to trial, while small, single-issue cases should not. Nor does it mean that the “best foot forward” principle has been displaced; quite the reverse. If anything, this principle is even more important after Hryniak, because on an unsuccessful motion for summary judgment, the court will now rely on the record before it to decide what further steps will be necessary to bring the matter to a conclusion. To do this properly, the court will need to have the parties’ cases before it.
[Emphasis added]
[41] In Hryniak, the Supreme Court set out how Rule 20 is to be applied to promote timely, affordable access to civil justice. Karakatsanis J., on behalf of the Court, noted that motions for summary judgment are an opportunity to simplify pre-trial procedures and move the emphasis away from the conventional trial in favour of proportional procedures tailored to the needs of the particular case. At para. 49, she stated:
There will be no genuine issue requiring a trial when the judge is able to reach a fair and just determination on the merits on a motion for summary judgment. This will be the case when the process (1) allows the judge to make the necessary findings of fact, (2) allows the judge to apply the law to the facts, and (3) is a proportionate, more expeditious and less expensive means to achieve a just result.
[Emphasis added]
[42] Karakatsanis J. held, at para. 58, that the judge hearing a motion for summary judgment must compare the procedures available in such a motion, supplemented, if necessary, by the fact-finding tools provided by Rules 20.04(2.1) and (2.2), with those available at trial. Based on the comparison, the judge must determine whether the court can, in the motion, make the necessary findings of fact and apply the principles of law to those facts in a proportionate, expeditious, and less costly manner, to achieve a just result:
This inquiry into the interest of justice is, by its nature, comparative. Proportionality is assessed in relation to the full trial. It may require the motion judge to assess the relative efficiencies of proceeding by way of summary judgment, as opposed to trial. This would involve a comparison of, among other things, the cost and speed of both procedures. (Although summary judgment may be expensive and time consuming, as in this case, a trial may be even more expensive and slower.) It may also involve a comparison of the evidence that will be available at trial and on the motion as well as the opportunity to fairly evaluate it.
[Emphasis added]
[43] Based on the guidelines in Hryniak, I must first, relying solely on the evidence before me and without using the fact-finding powers in Rule 20.04, determine:
• whether there is a genuine issue requiring trial;
• whether I can fairly and justly adjudicate the dispute; and
• whether the motion is a timely, affordable, and proportionate procedure under Rule 20.04(2)(a).
[44] If there is no genuine issue requiring a trial, I must grant summary judgment: Hryniak, at para. 66.
[45] If there is a genuine issue requiring a trial, I must exercise my discretion to determine whether the need for a trial can be avoided by using the powers that Rules 20.04(2.1) and (2.2) give to the motion judge, provided their use will not be contrary to the interests of justice; will lead to a fair and just result; and will serve the goals of timeliness, affordability, and proportionality in the litigation as a whole: Hryniak, at para. 66.
[46] The party moving for summary judgment has the onus of establishing that there is no genuine issue of material fact requiring a trial. Once that onus is met, the burden shifts to the responding party, opposing summary judgment, to demonstrate that the claim has a “real chance of success”: Hamilton Kilty Hockey Club Inc. v. Ontario (Attorney General) (2003), 2003 CanLII 24429 (ON CA), 64 O.R. (3d) 328 (C.A.), at para. 20. A self-serving affidavit will not suffice for that purpose in the absence of detailed facts and supporting evidence.
b) Applying these principles to the present case
i. Are there genuine issues of fact, or of mixed fact and law, that need to be decided by a trial?
(a) Were the posts defamatory and do they refer to Yunaland?
[47] I find, for the reasons that follow, that there is no genuine issue for trial as to whether the posts were defamatory or referred to Yunaland. The posts meet the legal test for defamation. Additionally, this Court has already made two orders, on a motion by Yunaland, unopposed by Google, declaring the reviews to be defamatory and directing the authors to remove them.
[48] In Mengarelli v. Forrest et al., 1971 CanLII 510 (ON SC), [1972] 2 O.R. 397, it was held to be defamatory to accuse a trading company of maintaining workers’ cottages in an unsanitary condition. It was similarly defamatory for the authors of the reviews on Google’s platform to assert that Collett Thorpe was a fraud and that there was a police investigation against her, and that Yunaland’s playground was dirty and unpaved.
[49] As set out by the Supreme Court in Grant v. Torstar Corp., 2009 SCC 61, [2009] 3 S.C.R. 640, at para. 28, the plaintiff in an action for defamation is required to prove three things to obtain judgment and an award of damages:
(1) that the impugned words were defamatory, in the sense that they would tend to lower the plaintiff’s reputation in the eyes of a reasonable person;
(2) that the words, in fact, referred to the plaintiff; and,
(3) that the words were published, meaning that they were communicated to at least one person other than the plaintiff.
[50] If these elements are established on a balance of probabilities, falsity and damage are presumed: Grant, at para. 28.
[51] Ferguson J. in Macdonald v. Mail Printing Co. (1901), 2 O.L.R. 278 at 282-283 (C.A.) stated:
Any written words published are defamatory which impute to the plaintiff that he has been guilty of any crime, fraud, dishonesty, immorality, vice or dishonourable conduct, or has been accused or suspected of such misconduct, and so too are all words which hold the plaintiff up to contempt, hatred, scorn or ridicule, and which, by thus engendering an evil opinion of him in the minds of right thinking men, tend to deprive him of friendly intercourse and society.
See also: Cartwright J. in Douglas v. Tucker, 1951 CanLII 54 (SCC), [1952] 1 S.C.R. 275 at para. 18, quoting favourably from Odgers on Libel and Slander (6th ed.) at p. 16.
[52] It is defamatory to suggest that persons should not frequent a business’ premises. In Gallant v. West (1955), 1955 CanLII 747 (NL CA), 36 M.P.R. 14 (Nfld. C.A.), it was held to be defamatory for the defendant to list the plaintiff’s hotel as an “off-limits” establishment for servicemen, as this constituted an imputation discreditable to the plaintiff in the operation of his hotel. It was similarly defamatory for the authors of the reviews on Google’s platforms to assert that Yunaland’s daycare was the worst daycare ever and to warn viewers not to waste their time bringing their children there.
[53] A defamatory statement is one that has a tendency to lower the reputation of the person to whom it refers in the estimation of right-thinking members of society generally and, in particular, to cause him or her to be regarded with feelings of hatred, contempt, ridicule, fear, dislike, or disesteem: Botiuk v. Toronto Free Press Publications Ltd., 1995 CanLII 60 (SCC), [1995] 3 S.C.R. 3, at para. 62; Color Your World at para. 14, as cited in Manson v. John Doe No. 1, 2013 ONSC 628, at para. 21.
[54] There is no dispute that the words posted on Google’s Local Reviews platform were defamatory and referred to Yunaland. On January 21, 2020, Van Melle J. made an Order, on a motion by Yunaland, declaring the reviews to be defamatory and directing the authors to remove them. RSJ Ricchetti made a further Order to the same effect on February 25, 2020. Google did not oppose either of those motions.
(b) Did Google publish the defamatory content?
[55] For the reasons that follow, I find that the evidence in this motion does raise a genuine issue of fact, or of mixed fact and law, as to whether Google, as an online “broadcaster,” published the content of the reviews posted on its electronic “Local Reviews” platform.
Legislative framework
The Libel and Slander Act provides, in part:
Definitions
1. (1) In this Act,
“broadcasting” means the dissemination of writing, signs, signals, pictures and sounds of all kinds, intended to be received by the public either directly or through the medium of relay stations, by means of,
(a) any form of wireless radioelectric communication utilizing Hertzian waves, including radiotelegraph and radiotelephone, or
(b) cables, wires, fibre-optic linkages or laser beams,
and “broadcast” has a corresponding meaning; (“radiodiffusion ou télédiffusion”, “radiodiffuser ou télédiffuser”)
Meaning of words extended
(2) Any reference to words in this Act shall be construed as including a reference to pictures, visual images, gestures and other methods of signifying meaning.
What constitutes libel
2. Defamatory words in a newspaper or in a broadcast shall be deemed to be published and to constitute libel.
[Emphasis added]
Canadian jurisprudence on the meaning of “publication”
[56] In Canada, the courts have defined “publish” broadly. Their definition has favoured plaintiffs who seek damages for defamation. Defendants, such as Google in the present case, have sought shelter from liability by arguing that, even if they published defamatory material authored by third parties, they fall within an exception that the courts have created for publishers who were acting merely as passive intermediaries in relation to such content. Google relies, in this regard, on the decision of the Supreme Court in Crookes v. Newton, [2011] 3 S.C.R. 269, 2011 SCC 47, which held that the publisher of a hyperlink was merely a passive intermediary in respect of the third-party content referenced by the hyperlink.
[57] In Porter v. Robinson Sheppard Shapiro, 2005 CanLII 80694 (ON CA), 73 O.R. (3d) 560, [2005] O.J. No. 40, the Court of Appeal for Ontario set aside a summary judgment dismissing a claim based on defamation arising from the defendant law firm posting communiqués on its website, holding that the plaintiff was entitled to a trial as to whether the content of material published on the website was defamatory.
[58] In Warman v. Grosvenor, 2008 CanLII 57728 (ON SC), Ratushny J. granted default judgment for libel arising from internet postings by the defendant. He stated:
[53] … Libel is any publication of defamatory material. The law presumes that the words are false unless and until the defendant proves the contrary. In order to succeed in his defamatory action, the plaintiff must establish each of the following three elements on a balance of probabilities: (1) the words are defamatory; (2) the words are published; (3) the plaintiff is the person defamed: Rainaldi, ed., Remedies in Tort, looseleaf (Toronto: Carswell, 2006), at pp. 6-20.
[55] … The Internet is a means of publication like no other, given its ability to instantaneously send words throughout the world to the millions who have access to computers. The defendant has caused defamatory words to be communicated to others by the postings and each time he has re-posted the same defamatory words in the postings, he has created a new publishing of these words.
[Emphasis added]
[59] In 2011, in Martinek v. Dojc, 2011 ONSC 3795, Wilson J., in the Divisional Court, allowed an appeal on the ground that the trial judge had failed to consider whether posting on the internet to a closed group, the membership of which required a password, constituted publication. Wilson J. wrote:
Was the material published?
[34] The law of defamation in the context of the internet is developing, and is dependent upon the facts. I conclude that there is no definitive answer to the legal question of whether the comments were published on these facts of this case, and therefore it is preferable to refer the matter to trial so that the issue can be determined upon a full factual record….
[Emphasis added]
[60] In Roy v. Ottawa Capital Area Crime Stoppers, 2018 ONSC 4207, 142 O.R. (3d) 507, C. McLeod J. stated, at para. 17, with reference to the issue of publication:
Third, the defamatory words must have been "published". This means that they were communicated to at least one person other than the plaintiff. Publication on a website would easily meet this test.
[Emphasis added]
[61] In the present case, Google argues that it was not the publisher of the defamatory content because the content was authored by the other defendants, who posted their reviews on its Local Reviews website. It relies, in this regard, on the Supreme Court of Canada’s decision in Crookes. That decision held that the defendant was not liable for defamatory content to which the defendant had provided access on its website by means of a hyperlink.
[62] For the reasons that follow, I reject this argument. I find that the Supreme Court’s decision in Crookes was based on the particular nature of hyperlinks, which do not give the host of the internet platform where they appear control over the content of the third party material to which the links provide its users access. The decision should not be applied broadly to extend immunity to publishers for third party content over which they are able to exercise control.
U.S. jurisprudence on the meaning of “publication”
[63] In the present case, Google relies on Canadian jurisprudence that excludes certain conduct of internet intermediaries from the meaning of “publication” under the Libel and Slander Act. In particular, it relies on the Supreme Court’s holding that a website’s posting of a hyperlink to defamatory third party content is not publication. Google argues, based on that jurisprudence, that it is immune from liability for all third party content that appears on its websites.
[64] It is illuminating to examine how U.S. jurisprudence has treated computer services that publish such hyperlinks, and whether the U.S. courts have excluded such actions from the definition of publishing that section 230 protects. Comparing the differing bases for the immunity granted to publishers of hyperlinks under Canadian and U.S. law can aid our understanding of the limits of the immunity for defamation that exists in Canadian law.
[65] Caution is required when applying U.S. jurisprudence in the Canadian context. Apart from the special protection that section 230 of the CDA gives to publishers in the U.S., there are differences between the approaches taken in the jurisprudence of the two countries to liability for defamation.
[66] Mitchel Drucker, in his article, “Canadian v. American Defamation Law: What Can We Learn from Hyperlinks,” in the Canada-United States Law Journal, 38 Can.-U.S. L.J. 141 (2013), 2013 CanLIIDocs 792, observes that Canadian courts and U.S. courts have both extended immunity for hyperlinks, but that they have done so based on distinct legal principles arising from the different approaches that the courts in each country have taken to the law of defamation.
[67] U.S. law, in the interest of constitutionally protected free speech, gives broad statutory protection to “publishers” generally from liability for defamation. In the United States, the courts have considered the meaning of “publish” in the context of section 230(c)(1) of the Communications Decency Act, 47 U.S.C. § 230 (“CDA”). The CDA was enacted in 1996, during the early days of the internet. It bars claims that seek to hold providers of “interactive computer service[s]” (e.g., websites like those of Google in the present case) liable “as the publisher or speaker” of content that the service did not create or develop. Section 230 was enacted to protect internet companies from strict liability in defamation actions under state law because they had permitted other parties to post defamatory materials on the internet companies’ websites.
[68] In seeking redress for persons harmed by defamatory statements, plaintiffs’ lawyers in the U.S. have argued that “publication” should be defined narrowly so as to circumscribe the immunity that publishers in the U.S. enjoy. They seek to hold internet services such as Google liable for defamatory content appearing on their platforms by arguing that when the internet services draw the attention of their users, by hyperlinks or otherwise, to material authored by third parties, they are not publishers and are therefore not entitled to publishers’ immunity. Publication, they say, requires that a defendant perform the functions traditionally performed by publishers.
[69] In the U.S., internet companies such as Google, when sued for defamation in relation to content authored by third parties, argue that they are publishers of such material, and are entitled to statutory protection as publishers, because they are performing functions “comparable” to those that have traditionally been performed by publishers.
[70] Drucker notes that in the U.S., the courts have developed the “single publication rule,” whereby only the original publication of content is actionable. He observes that by applying the single publication rule to internet platforms, the U.S. courts have extended immunity to defendants such as Google in relation to hyperlinks that have directed users to third party content containing defamatory material, in much the same way that courts in Canada have provided such immunity based on the Canadian legal principle of “innocent dissemination.”
[71] In Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009), the U.S. Court of Appeals for the Ninth Circuit, citing Fourth Circuit precedent, applied Section 230 of the CDA in ruling that Yahoo!, Inc., as an Internet service provider, could not be held responsible for failing to remove objectionable content posted to its website by a third party. The Court held that the failure to remove such content fell within the definition of “publication” in section 230 of the CDA, as “publication involves reviewing, editing, and deciding whether to publish or to withdraw from publication third party content.”
[72] When section 230 was first enacted, mere posting on “bulletin boards” and in “chat rooms” was the prevalent practice. Over the past two and a half decades, many interactive computer services have, in a variety of ways, sought to draw the attention of users, or “recommend to” them, particular materials, such as text or videos, authored by other parties. Those referrals or recommendations are implemented through automated algorithms, which select the specific material for a particular user based on information about that user that is known to the interactive computer service. The public has only recently begun to understand the prevalence and increasing sophistication of these algorithm-based recommendation practices.
[73] The expansion of internet activities in the past two and a half decades has caused some courts in the U.S. to question whether the broad interpretation given to “publication” under section 230 of the CDA, and the protection from liability that it entails in the U.S., should be reconsidered. Justice Clarence Thomas in the U.S. Supreme Court (“USSC”), writing in respect of the denial of certiorari in Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 141 S. Ct. 13 (2020), observed that, “[In] the ... years since [its enactment], many courts have construed the law broadly to confer sweeping immunity on some of the largest companies in the world.”
[74] The U.S. Court of Appeals for the Ninth Circuit, in Reynaldo Gonzalez, et al. v. Google LLC, 2 F.4th 871 considered whether an interactive computer service provider is a “publisher,” entitled to protection from liability under section 230 of the CDA, in a claim arising from third party content. The plaintiffs were the estate and family members of Nohemi Gonzalez, an American woman killed during a November 2015 terrorist attack in Paris, France. The Islamic State, also known as ISIS, claimed responsibility. In 2016, the plaintiffs sued Google in the Northern District of California. They alleged that, by operating YouTube, Google incurred liability under the Anti-Terrorism Act (ATA), 18 U.S.C. § 2333, and committed or abetted “an act of international terrorism” that caused Ms. Gonzalez’s death.
[75] Google moved successfully in the District Court, pursuant to Rule 12(b)(6) of the Federal Rules of Civil Procedure, to dismiss the plaintiffs’ claims on the ground that their allegations failed to state “a claim on which relief can be granted.” The plaintiffs appealed to the U.S. Court of Appeals for the Ninth Circuit, which affirmed the dismissal. The plaintiffs have now petitioned the USSC for certiorari, and that Court is slated to hear the appeal in its current session.
[76] The respondent in Gonzalez argues that the petitioners’ claim “seeks to treat an interactive computer service provider as a ‘publisher,’ and is thus barred by section 230, when the claim targets the provider’s display of third-party content of potential interest to individual users.” Gonzalez is the third recent federal court of appeals decision that has considered the scope of section 230’s definition of protected “publications,” as applied to “recommendations” of content created by other parties: Gonzalez, 36a-44a, 81a-92a, 92a-110a; Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093, 1098 (9th Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020); and Force v. Facebook, Inc., 934 F.3d 53, 65 (2d Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020).
[77] In each of those appeals, the majority of the appeals court held that an interactive computer service is entitled to the same protection under section 230 when the service affirmatively recommends other party materials as when the service merely permits another party to post that material on the service’s website. Two dissenting opinions and one concurring opinion in the appeals rejected this broad interpretation of section 230.
[78] As noted above, Google relies heavily, in the present case, on the Supreme Court’s treatment of hyperlinks in Crookes. I will address that decision later in these reasons. It is useful, however, at this point, to note the difference in the way Canadian jurisprudence and U.S. jurisprudence approach the issue of hyperlinks.
[79] Hyperlinks refer viewers of content on one electronic platform to content on another platform. They materially differ from the reviews posted on Google’s platform in the present case, in that the publisher of content containing a hyperlink has no control over the hyperlinked content and cannot change or remove such content.
[80] In Drucker’s article, the author notes, at pps. 152 to 166, the seeming convergence of the treatment of hyperlinks by the Supreme Court in Crookes and by the U.S. Court of Appeals for the Third Circuit in In Re: Philadelphia Newspapers, 690 F.3d 161 (3d Cir. 2012). He adds that this apparent convergence has occurred notwithstanding the more “plaintiff-friendly” defamation law in Canada, partly attributable to the “single publication rule” in U.S. jurisprudence, which limits liability arising from the re-publication of defamatory content.
[81] Drucker explains, at p. 159, how the different approaches to the single publication rule in the two countries converged in the treatment of hyperlinks in the two cases:
Both Crookes and Newspapers stand for the general proposition that hyperlinks cannot serve as the basis for the tort of defamation. In Newspapers, the Inquirer was found not liable for posting a hyperlink to allegedly defamatory content it had published in the past. Similarly, in Crookes, Mr. Newton was found not liable for posting hyperlinks to allegedly defamatory content created by third parties.
While these holdings with regard to hyperlinks are facially similar, the precise factual underpinnings do differ somewhat. In Newspapers, the plaintiffs’ allegation of defamation was rooted in the defendant's hyperlink to a prior article that was also written by the defendant. Thus, the precise question there was whether a hyperlink constituted a republication of content that had already been published by the defendant. Comparatively, in Crookes, the plaintiff was attempting to hold the defendant liable for hyperlinking to content created by another author. As such, the precise question in Crookes was not whether a hyperlink constituted republication of a defendant's already published content. Rather, the Crookes court was considering whether a defendant can be liable for hyperlinking to content that he did not create. Despite these factual differences, the general premise emerging from each case remains the same: a hyperlink is not an adequate basis on which to hold a defendant liable for the publication of allegedly defamatory content.
[Emphasis added]
[82] The Gonzalez case involves applying section 230 to YouTube’s selection and arrangement of third-party content to display to users what the petitioners call “targeted recommendations.” The principal issues in the case currently before the USSC are:
(a) whether the petitioners are entitled to have the highest Court review the Court of Appeals’ dismissal of their claim, based on a conflict between the precedents of different circuits on the issue of the liability of an electronic platform for third-party content posted on it, and,
(b) particularly, whether the host of the platform is a “publisher” by virtue of having performed a traditional editorial function.
[83] The plaintiffs in Gonzalez rely on the inconsistency between the traditional editorial functions standard, accepted in most circuits, and the decisions from the Ninth and Second Circuits, holding that a referral, or recommendation, is also protected by section 230. Judge Berzon, concurring in Gonzalez, expressly stated that recommendations are outside the scope of the traditional standard. Endorsing Judge Katzmann’s opinion in Force, which made that distinction, she stated:
…, if not bound by Circuit precedent I would hold that the term ‘publisher’ under section 230 reaches only traditional activities of publication and distribution.... [T]argeted recommendations ... are well outside the scope of traditional publication. pps. 81a-82a.
[84] Whether the USSC hears the petitioners’ appeal in Gonzalez will depend largely on whether that Court finds that there is a conflict in the decisions of federal appeal courts in different circuits. To support their position that such a conflict exists, the petitioners argue that when the USSC denied certiorari in Dyroff, the respondent insisted that there was no conflict between precedents in different circuits because no circuit had applied the “traditional editorial function test,” employed in U.S. jurisprudence in the interpretation of “publisher” under section 230 of the CDA, whereas in Gonzalez, Google argues that there is no circuit conflict because all circuits apply that test. See: Brief in Opposition, Dyroff, pps. 2, 26-30, available at 2020 WL 1486537; Brief in Opposition, Gonzalez, pps. 11-12.
[85] The petitioners in Gonzalez rely on the two dissenting opinions and one concurring opinion in Force and Dyroff to raise the broader question of whether the protection in section 230(c)(1) of the CDA applies to claims against an interactive computer service for displaying recommended content, which the petitioners say is not the action of a publisher and is not entitled to section 230 protection. Because this issue concerns only “targeted recommendations,” the respondent argues that even the Ninth and Second circuit courts agree that section 230 bars claims that seek to hold websites liable for activities that publishers traditionally perform, like selecting, editing, and disseminating third-party content. Additionally, Google argues, in Gonzalez, that a claim treats an online service as a “publisher” within the meaning of section 230 when the claim targets the service’s curation and display of third-party content of potential interest to each user. It cites the majority judgment in Force, which held that “[A]ctively bringing [a speaker’s] message to interested parties . . . falls within the heartland of what it means to be the ‘publisher’ of information.” Force, 934 F.3d at p. 65 (citation omitted).
[86] The petitioners in Gonzalez (at pps. 29-30 of their brief) argue that actively bringing material with third-party content to the attention of users creates new content by implicitly informing users how to access that material — even by providing hyperlinks to it. They write, at p. 29 of their brief:
Equally importantly, as Judges Katzmann, Berzon and Gould all pointed out, recommendations by an interactive computer service (of other-party content or anything else) are communications by and from the service itself, not by and from some other party. “[R]ecommendation[s] ... involve communication by the service provider, and so are activities independent of simply providing the public with content supplied by others.” (Berzon, J., concurring); see 105a-106a (Gould, J.); Force, 934 F.3d at 82-83 (Katzmann, J.). The suggestion that a user access some text, image, or other content may be made in so many words (as are some recommendations on Facebook), or may simply take the form of hypertext or a hyperimage chosen to interest a particular user and displayed before a user by the interactive computer service. A sentence such as “To see this video, click here” conveys “information provided by” the interactive computer service (how to access a particular video), not information provided by whoever created the video itself.
[Emphasis added]
[87] Google, as respondent in Gonzalez, argues (at p. 22 of its brief) that every publication tells a reader how to access content — with such instructions as “click here” or “read on” — and implicitly represents that the content may be worth reading, and that if that suffices to support liability, section 230 would be a “dead letter,” meaning that it would provide no meaningful protection from liability. There is a dispute between the parties in Gonzalez as to whether a defendant is liable, and not protected by the CDA, for hyperlinked content. The Supreme Court in Crookes held that such content does not attract liability.
[88] In Force, Judge Katzmann wrote a lengthy dissent, objecting to “how far we have strayed from the path on which Congress set us out....” 934 F.3d at 77; see 934 F.3d at pps. 76-89. Extending the protections of section 230 to recommendations, he argued, was inconsistent with the text of the statute, which applies only to claims that seek to treat a defendant as a “publisher.” 934 F.3d at pps. 77, 80-81. The majority opinion, Judge Katzmann pointed out, was inconsistent with precedents in other circuits that limited the application of section 230 to a publisher’s “traditional editorial functions.” 934 F.3d at p. 81. A defendant’s recommendation, he reasoned, conveyed a message from the defendant itself, and thus was not merely publishing content created by another party. 934 F.3d at pps. 82-83.
[89] In his dissenting opinion in Force, Judge Katzmann concluded that extending section 230 protection to targeted recommendations by interactive computer services was inconsistent with the prevailing interpretation of section 230:
[O]ur precedent does not grant publishers CDA immunity for the full range of activities in which they might engage. Rather, it “bars lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content” provided by another for publication. 934 F.3d at 81 (quoting FTC v. LeadClick Media, LLC, 838 F.3d 158, 174 (2d Cir. 2016)). Judge Katzmann cited decisions in the First, Third, Fourth, Sixth, Tenth, and District of Columbia Circuits that have adopted that traditional editorial functions standard. The recommendation practices in Force, Judge Katzmann pointed out, “[went] far beyond and differ[ed] in kind from traditional editorial functions.” 934 F.3d at p. 82, citations omitted.
[90] Judge Katzmann warned that, “mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths.” 934 F.3d 87-88 (noting that algorithm-based recommendations have been “a real boon” for extremist groups).
[91] The protections of section 230 are expressly limited to claims which “treated [an interactive computer service] as the publisher” of content created by others. Prior to the 2019 decisions in Force and Dyroff, the courts of appeals in the U.S. had generally agreed that a claim “treated [a defendant] as a publisher” of other-party created content when it would impose liability on that defendant because the defendant had engaged in a publisher’s traditional editorial function, such as deciding to publish or edit content created by others.
[92] In Dyroff, the defendant’s recommendations included an email which it had sent to a user, notifying him that another party had posted new material on the defendant’s website, and informing the user of several ways he could access that material. The Ninth Circuit held “[b]y recommending user groups and sending email notifications, [the defendant] ... was acting as a publisher of others’ content.” 934 F.3d at p. 1098. Thus, to hold the defendant liable for making such recommendations would “inherently require the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.” Id. (quoting Barnes v. Yahoo!, Inc., 570 F.3d 1096, p. 1102 (9th Cir. 2009)).
[93] In Gonzalez, the three judges in the Court of Appeals differed mainly as to whether recommendations are within the scope of section 230’s protection. Judge Christen’s majority opinion noted that “[t]he Gonzalez Plaintiffs’ theory of liability generally arises from Google’s recommendations of content to users.” p. 7a. She concluded that under Ninth Circuit precedent, a recommendation is protected by section 230, provided the defendant’s method for making recommendations (in this case, algorithms) did not treat harmful other-party content “differently than other other-party-created content.” pps. 36a-44a. An algorithm-based recommendation system, Judge Christen reasoned, involved the “same” “core principle” as a “traditional search engine.” pps. 38a and 41a.
[94] Judge Christen explained that the panel could not adopt the interpretation Judge Katzmann had given of section 230 in the Force case because “Ninth Circuit case law forecloses his argument.” p. 44a. Judge Berzon, in a concurring opinion, argued at length that Judge Katzmann’s construction of section 230 was the correct one. pps. 81a-92a. Adopting Judge Katzmann’s opinion that section 230 protects only traditional editorial functions, Judge Berzon stated:
[F]or the reasons compellingly given by Judge Katzmann in his partial dissent in Force ..., if not bound by Circuit precedent, I would hold that the term “publisher” under section 230 reaches only traditional activities of publication and distribution—such as deciding whether to publish, withdraw, or alter content—and does not include activities that promote or recommend content.... p. 82a.
Targeted recommendations and affirmative promotion of connections and interactions ... are well outside the scope of traditional publication. p. 84a.[^1]
[95] Judge Gould dissented from the majority’s opinion that section 230 protects recommendations made by an interactive computer service. pps. 96a-110a. He distinguished between YouTube’s action in merely permitting an author (in that case, ISIS) to upload its videos to the YouTube server, and YouTube’s use of recommendations to encourage viewing of ISIS videos by “those already determined to be most susceptible to the ISIS cause.” p. 102a. Judge Gould expressly endorsed Judge Katzmann’s dissenting reasons in Force (p. 98a), and even appended the entirety of those reasons to his own opinion. pps. 139a-169a.[^2]
[96] In October 2020 in Malwarebytes, Justice Clarence Thomas in the USSC set out the prevailing interpretation of section 230, which protects traditional editorial functions:
“[F]rom the beginning, courts have held that § 230(c)(1) protects the ‘exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.’” Malwarebytes, 141 S. Ct. at p. 16 (emphasis omitted) (quoting Zeran, 129 F.3d at 330).
[97] The majority opinion in Gonzalez does not dispute Judge Katzmann’s view that other circuits apply the traditional editorial functions test in determining what activities are protected by section 230. It also does not dispute Judge Berzon’s assertion that making recommendations is not a traditional publisher’s function. Instead, the controlling legal issue in the Ninth Circuit is not whether a targeted recommendation is a traditional editorial function, but whether it is comparable to a “traditional search engine” (emphasis added). 38a, 41a.
[98] In Malwarebytes, Justice Thomas expressed concern that “[c]ourts have long stressed non-textual arguments when interpreting § 230, leaving questionable precedent in their wake.” 141 S. Ct. at 14. Judge Gould shared that view, explaining, “I agree with Justice Thomas that Section 230 has mutated beyond the specific legal backdrop from which it developed, and I cannot join a majority opinion that seeks to extend this sweeping immunity further.” p. 110a, n.9.
[99] It is common for an interactive computer service to both permit another party to post content on the service’s servers, and also to recommend that content. The purpose of those recommendations is to induce users to visit other content on the service’s own website, thus enabling the interactive computer service to earn additional advertising revenue. But the petitioners in Gonzalez argue that posting of other-party content, and the recommending of it, are different acts; only insofar as it permits the posting (and engages in related traditional editorial functions) is the interactive computer service acting as a publisher. The recommendations at issue in Dyroff, for example, included an email — written by the defendant itself — notifying a user that new material had been posted onto the defendant’s website. Drafting and sending that email were different actions, with different status under section 230, than permitting another party to post the material at issue on the defendant’s website.
[100] As Judge Katzmann pointed out in Force, and Judges Berzon and Gould in Gonzalez, recommendations by an interactive computer service (of other-party content or anything else) are communications by and from the service itself, not by and from some other party. “[R]ecommendation[s] ... involve communication by the service provider, and so are activities independent of simply providing the public with content supplied by others.” (Berzon, J., concurring); see 105a-106a (Gould, J.); Force, 934 F.3d at pps. 82-83 (Katzmann, J.).
[101] The suggestion that a user access some text, image, or other content may be made in so many words (as are some recommendations on Facebook), or may simply take the form of a hyperlink, hypertext or hyperimage, chosen to interest a particular user and displayed to a user by the interactive computer service. A sentence such as, “To see this video, click here” conveys “information provided by” the interactive computer service as to how to access a particular video, not information provided by whoever created the video itself.
[102] The majority of the Court of Appeals in Gonzalez reasoned that targeted recommendations directed at a particular user are protected by section 230 because they are essentially the same as search engines. This is a new explanation for why recommendations are protected by section 230, different from the reasoning in the earlier decisions in Dyroff and Force:
This [YouTube] system is certainly more sophisticated than a traditional search engine, which requires users to type in textual queries, but the core principle is the same: Google’s algorithms select the particular content provided to a user based on that user’s inputs. See Roommates, 521 F.3d at 1175 (observing that search engines are immune under § 230 because they provide content in response to a user’s queries....).” p. 38a.
[103] The petitioners in Gonzalez argue that the majority decision of the Court of Appeal in that case has two fatal flaws. The majority acknowledged that “[t]here is no question § 230(c)(1)” — as interpreted by the panel and other courts — “shelters more activity than Congress envisioned it would.” p.80a.
[104] First, the petitioners argue, section 230 applies only when a claim would, in effect, treat the interactive computer service as the “publisher” of content provided by another, not when it would treat the service as a “search engine.” Whatever the state of Ninth Circuit precedent regarding search engines, proffering an analogy between recommendations and search engines does not connect the court’s holding to the actual text of the statute.
[105] Second, although a company providing a search engine would at least usually be an interactive computer service, section 230 does not “immun[ize]” interactive computer services. Section 230 does not apply to everything an interactive computer service (including a search engine) does but accords protection only insofar as a particular complaint seeks to treat that service as a publisher. The court below argued that both search engines and algorithm-based recommendations involve matching (the former matching responses with a user’s query, the latter matching suggestions with information the interactive computer service has about the user) (p. 38a), but that does not establish that all (or even any) uses of matching constitute publishing.
[106] The text of section 230 distinguishes between a system that provides, to a user, information that the user is actually seeking (as does a search engine), and a system utilized by an internet company to direct at a user information (such as a recommendation) that the company wants the user to have. Section 230(b) states that “[i]t is the policy of the United States ... to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the internet and other computer services.” 47 U.S.C. § 230(b) (emphasis added).
[107] The petitioners in Gonzalez argue that Congress found that “[t]he developing array of Internet and other interactive computer services ... offers users a greater degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.” 47 U.S.C. § 230(a) (emphasis added). The core function of a search engine advances that policy because it enables a user to select what information he or she will receive; on the other hand, when an interactive computer service makes a recommendation to a user, it is the service, not the user, that determines that the user will receive that recommendation.
[108] The majority opinion in Dyroff took a different approach, one which never refers to search engines. The justification Dyroff offered for immunizing recommendations (as well as email notifications about third party content) consists of three somewhat enigmatic sentences:
By recommending user groups and sending email notifications, [the interactive computer service] ... was acting as a publisher of others’ content. These functions — recommendations and notifications — are tools meant to facilitate the communication and content of others. They are not content in and of themselves.
934 F.3d at p. 1098.
[109] The petitioners in Gonzalez argue that this analysis cannot be reconciled with the text of the statute, or with ordinary English. The first sentence is the operative one, but is difficult to understand. One who recommends or sends a notification about material created by another is not, by so doing, ipso facto “acting as publisher” of that material, at least as those words are ordinarily understood. If a member of this Court were to comment “John Grisham’s latest novel is terrific,” or send an email announcing that “Maria Yovanovitch’s new book is in stock at Politics and Prose,” he or she would not by so doing be transformed into the publisher of either book.
[110] Because recommendations and notifications are “meant to facilitate the communication and content of others,” they can be understood only as an ancillary part of the process of publishing that other-created content. But in ordinary English, a person who takes actions “meant to facilitate the communication and content of others” would not usually be described, in the words of section 230, as the “publisher” of that content.
[111] The complaint in Force alleged that Facebook made two types of recommendations. Facebook both recommended content and recommended “friends,” who could be individuals or groups. 934 F.3d at p. 65. If a user opted to “friend” a person or group, the user typically would then receive matter related to content which that person or group had put in a public posting. The plaintiff in Force alleged that the result of these recommendations was to connect users with terrorists or terrorist groups, leading them to join or support Hamas, a terrorist organization which had killed the plaintiffs’ relatives. 934 F.3d at pps. 57, 65.
[112] The majority in Force reasoned that recommendations are protected by section 230 because the consequences of the recommendations about which the plaintiffs complained, namely connecting users to individuals, organizations, or materials, were also consequences of publishing.
[A]rranging and distributing third-party information inherently forms “connections” and “matches” among speakers, content, and viewers of content, whether in interactive internet forums or in more traditional media. That is an essential result of publishing. Accepting plaintiffs’ argument would eviscerate Section 230(c)(1); a defendant interactive computer service would be ineligible for Section 230(c)(1) immunity by virtue of simply organizing and displaying content exclusively provided by third parties. 934 F.3d at 66 (footnote omitted).
[113] The petitioners in Gonzalez argue that this analysis has several clear flaws. First, the text of section 230 applies only to claims that seek to treat an interactive computer service “as a publisher,” not more broadly to claims that seek to treat an interactive computer service as an entity that brought about “an essential result of publishing.” Many individuals and organizations bring about such results whom no one would call a publisher. A skilled librarian brings about a connection between a patron and a book. But the librarian is not a “publisher.” Second, the terms of section 230 expressly apply only to transmitting to a user the actual “information provided by another information content provider.” An interactive computer service that instead gives a user a link to, or recommendation or suggestion about, that other-party content is outside the literal terms of the statute.
[114] The petitioners in Gonzalez argue that the majority of the Court of Appeals had insisted it was holding only that recommendations by an interactive computer service are protected by section 230 if those recommendations are made in a “neutral” manner. “We only reiterate that a website’s use of content-neutral algorithms, without more, does not expose it to liability....” p. 41a. This reasoning of the Court of Appeals is consistent with that of the Supreme Court of Canada in Crookes.
[114] The petitioners in Gonzalez note that, “[The complaint does not] allege that Google’s algorithms treated ISIS-created content differently than any other third-party created content.” Id. The Second Circuit majority in Force also stressed that the recommendations there were formulated in a neutral manner. 934 F.3d at pp. 69-70. But if making recommendations falls within the functions of a “publisher” under section 230, there would be no basis for distinguishing between neutrally formulated and deliberately pro-terrorist recommendations.
[116] According to the petitioners in Gonzalez, the core consequence of a claim treating a defendant as a “publisher” of content created by another is that the defendant is protected from liability when it decides whether or not to publish that content. Under the terms of section 230, YouTube would be protected if it chose to widely distribute a favorable review of ISIS videos that was taken from a terrorist publication but refused to permit the U.S. Department of Defense to upload an analysis condemning those videos.
[117] The petitioners in Gonzalez note that this was the problem in Sikhs for Justice, Inc. v. Facebook, 697 Fed. Appx. 526 (9th Cir. 2017), discussed in Malwarebytes at p. 17 (statement of Justice Thomas). The plaintiff in that case sought to place on its Facebook page materials strongly critical of the role of Indian Prime Minister Narendra Modi in condoning the 2002 massacre of hundreds of Muslims in riots in Gujarat. Facebook removed that criticism of the Prime Minister from the plaintiff’s Facebook page in India, although not elsewhere in the world, an action evidently intended to curry favor with the Indian government.
[118] When Sikhs for Justice sought an injunction to restore those materials to its Facebook page, Facebook successfully argued that section 230 gave it an absolute right to censor such anti-terrorist materials. The petitioners in Gonzalez argue that there is no textual basis for distinguishing between non-neutral posting policies and non-neutral recommendation algorithms, and no justification for distinguishing between the murder of Muslims in India and the murder of Nohemi Gonzalez in France.
[119] Canada has not enacted legislation like section 230 of the CDA in the U.S., protecting interactive computer services when they perform functions that publishers have traditionally performed, involving selecting, editing, and disseminating third-party content. Instead, the Libel and Slander Act of Ontario imposes liability, subject to certain exceptions, on such services when they disseminate defamatory third-party content.
[120] It is evident from the arguments of the litigants in Gonzalez that they are seeking the USSC’s clarification of the precise demarcation between a website that directs the attention of viewers to third party content in a way that is comparable to the functions traditionally performed by a publisher, and that attracts liability, and a website that merely identifies the location of such content, such as by means of a hyperlink, which does not. As will be noted below, the Supreme Court of Canada in Crookes has provided that clarification for courts in Canada.
(c) Was the defamation actionable? That is, was notice given and, if so, did Google comply with the Act?
[121] “Defamation,” for the most part, “is a strict liability tort.” See Foisy J. in Christie v. Geiger (1984), 1984 CanLII 1252 (AB KB), 35 Alta L.R. (2d) 316 (Q.B.) at para. 22. It has been accepted by the courts that, apart from the possible exception of the issue of publication, the innocence, good faith, motive, belief, reasonableness, or intention of the defendant is generally irrelevant to the question of liability.
[122] Google asserts that users post millions of reviews on its platform daily, which add up to 7 billion reviews each year, making it impossible for it to monitor the content of those posts. Additionally, it asserts that the legal definition of what is defamatory varies from place to place, which adds to the impracticability of monitoring the posts for defamatory content.
[123] It is not the number of posts that are made to Google’s platform that is at issue in this case but, rather, the number of complaints it receives concerning allegedly defamatory content. Evidence of the volume of complaints Google receives is not before me. This fact, and the specifics of what would be required for Google to assess the content of the posts that gave rise to the complaints to determine whether they were defamatory, are relevant to a determination as to whether Google should continue to be regarded as a passive intermediary once it has been notified of allegedly defamatory posts.
[124] All Canadian provinces have enacted statutory limitations on the recovery of damages where a libel was published innocently or in good faith and there was a prompt retraction. The Libel and Slander Act provides:
Notice of action
5 (1) No action for libel in a newspaper or in a broadcast lies unless the plaintiff has, within six weeks after the alleged libel has come to the plaintiff’s knowledge, given to the defendant notice in writing, specifying the matter complained of, which shall be served in the same manner as a statement of claim or by delivering it to a grown-up person at the chief office of the defendant.
Where plaintiff to recover only actual damages
(2) The plaintiff shall recover only actual damages if it appears on the trial,
(a) that the alleged libel was published in good faith;
(b) that the alleged libel did not involve a criminal charge;
(c) that the publication of the alleged libel took place in mistake or misapprehension of the facts; and
(d) that a full and fair retraction of any matter therein alleged to be erroneous,
(i) was published either in the next regular issue of the newspaper or in any regular issue thereof published within three days after the receipt of the notice mentioned in subsection (1) and was so published in as conspicuous a place and type as was the alleged libel, or
(ii) was broadcast either within a reasonable time or within three days after the receipt of the notice mentioned in subsection (1) and was so broadcast as conspicuously as was the alleged libel.
[Emphasis added]
[125] There is a genuine issue of fact as to when Google was notified of the allegedly defamatory content of the reviews posted on its platforms. Google submits that any defamation that was contained in the reviews was not actionable unless and until Yunaland complied with s. 5(1) of the Libel and Slander Act by giving it notice of the specific words complained of.
[126] Google argues that it first learned of Yunaland’s allegation that the Local Reviews at issue were defamatory after this action was begun, when it received Yunaland’s Statement of Claim. Yunaland denies this assertion and relies on the notice it served earlier on Google Canada and on Google at its offices in the United States.
[127] As stated in paragraph 13, above, Yunaland served written notice to Google on April 16, 2019, under s. 5(1) of the Libel and Slander Act of Ontario, at the Toronto offices of Google Canada Corporation, a subsidiary of Google. It served Google at its U.S. head office on April 25, 2019. Yunaland therefore alleges that Google has had knowledge of their complaints and the alleged defamation since the earlier of those two dates.
[128] Yunaland asserts that beginning when Google received its written notice, Google had knowledge of the defamatory reviews, and of the fact that Yunaland did not know any of the persons, real or fictitious, who suddenly put negative reviews on their profile on Google’s platform.
[129] Yunaland alleges that even after Google received the Statement of Claim in the United States by means of service pursuant to the terms of the Hague Service Convention, which took several months, it still refused to remove the defamatory words without a court order. They submit that Google continued to publish the defamatory content even after it received Yunaland’s Statement of Claim, until RSJ Ricchetti made his Order on February 25, 2020, declaring the content of the reviews to be defamatory and directing the authors to remove them.
[130] The issues of when Google first received notice of the alleged defamation, and what steps, if any, it took at that time to examine or remove the content, are key issues of fact on which the outcome of the action may turn. I do not agree with Google that these issues are readily resolved in this motion on the evidence before me.
[131] Google asserts that the notices of libel that Google received at the office of its Canadian subsidiary on April 16, 2019, did not assert that Yunaland were frauds or were being investigated by the police. Rather, they were generic libel notices that simply notified Google that it would be sued.
[132] Google further asserts that it did not receive the texts of the posts until it received Yunaland’s Statement of Claim and that on reviewing the texts at that time, it was unable to determine whether the contents of the posts were true or false or whether they were defamatory. It states that the posts, on their face, appeared to be genuine, albeit negative, consumer reviews. Google therefore advised Yunaland to obtain a determination from the Court against the authors of the posts that the posts were defamatory and requiring the authors to remove them. After such a determination, Google would then voluntarily consider removing the posts.
[133] According to Google’s own policies, the authors of reviews appearing on its platform should not impersonate others or have a conflict of interest, as is alleged to be the case here, where Yunaland asserts that the reviews were not posted in good faith with the intent of providing an accurate account of the services provided. Additionally, according to Google’s policies, reviews ought not contain fake content, as is alleged to be the case here, where Yunaland asserts that reviews were authored by persons, fictitious or real, who had never been customers of Yunaland.
[134] Yunaland alleges that some of the individuals who posted defamatory content were fictitious. Google acknowledges that it does not have the capacity to ascertain the identity of those who open accounts and post content on its platform, or to determine whether those account holders are genuine. Evidence is required as to what measures would be necessary to give Google this capacity, whether it would be practicable for Google to adopt them, and whether this would provide better protection against reckless or malicious posts.
[135] Google submits that the posts that gave rise to Yunaland’s complaints, on their face, were not so clearly defamatory as to enable Google to make that determination without a finding by the Court that the content was defamatory.
[136] Google acknowledges that there is no evidence before me as to whether it had the capacity to ask the entity that posted a review for evidence of the accuracy of its content. There is a genuine issue of fact as to whether, if Google received a complaint that a post falsely claimed that Yunaland was being called a fraud and was being investigated by the police, or that its playground was not paved, it was capable of ascertaining the accuracy of those assertions.
[137] Google relies on McSweeney J.’s endorsement dated September 19, 2019, in Yunaland’s motion for default judgment against the defendants Boakye and Francis only. In a motion for default judgment, the Court, pursuant to Rule 19.06, assumes the allegations in the Statement of Claim to be correct. Google argues that McSweeney J.’s decision that Yunaland’s material was insufficient for her to declare the defendants’ posts to be defamatory is evidence that Google could not reasonably have ascertained that the posts, as described in Yunaland’s Statement of Claim, were defamatory.
[138] McSweeney J.’s endorsement on September 19, 2019, stated, in part:
This matter has been returned on a regular short motions list. The materials filed do not enable the court to grant the relief requested. Per rule 19.06 a plaintiff is not entitled to judgment unless the facts, deemed admitted, entitle plaintiff to judgment. I cannot make that assessment on this record. Further, the damages sought are unliquidated and must be proven. The motion is therefore not granted. I direct that this matter of default judgment, as against the defendants Boakye and Francis only, shall be dealt with by way of an uncontested trial.
…I am advised by counsel that Google defendants are prepared to remove the allegedly defamatory postings upon receipt of an order declaring such postings “defamatory”. As found earlier I cannot make that order today. However the order made in this endorsement is made without prejudice to the plaintiffs’ ability to seek such interim or other injunctive relief as they may wish with respect to removal of the impugned postings from public view by Google defendants. Such relief may be sought at any time pursuant to the rules and practice of this court.
[Emphasis added]
[139] Rule 19.02(1)(a) applies to allegations of fact made in the statement of claim, not to conclusions of law or mixed law and fact: Paul’s Transport Inc. v. Immediate Logistics Limited, 2022 ONCA 573, para. 80. McSweeney J. was therefore required, on Yunaland’s motion for default judgment, to apply the law to the facts alleged in the Statement of Claim and deemed to be admitted pursuant to r. 19.02(1), and to determine whether the reviews were defamatory.
[140] Having reviewed the motion record that was before McSweeney J., I conclude that the reason she was unable to make the order Yunaland had requested was that the Statement of Claim that was attached to the supporting affidavit in the motion record omitted the initial paragraphs of the Claim and reproduced only paragraphs 41 and following of it. In any event, I do not interpret McSweeney J.’s endorsement as a finding that the content of the posts, as reproduced in the Statement of Claim, was not defamatory.
[141] As noted above, Rule 19.02(1)(a) provides that where no Statement of Defence has been delivered, the defendant is deemed to admit the allegations in the Statement of Claim. However, Rule 19.06 provides that a plaintiff is not entitled to judgment on a motion for judgment merely because the facts alleged in the statement of claim are deemed to be admitted, unless those facts entitle the plaintiff to judgment. The excerpt of the Statement of Claim that was put before McSweeney J. omitted the very paragraphs that contained the defendants’ defamatory words. As a result, Yunaland was not entitled to judgment based on the material filed.
[142] The omitted portion of the Statement of Claim contained the following key facts:
- Audrey [Boakye] published defamatory post on Google listing of Yunaland on or around March 12, 2019, under the identification of “Drina Yaa Agyeiwaah,” which is the same as her email address, attacking both Yunaland and Collett, as detailed herein.
This is the worst daycare center ever. Don’t waste your time bringing your kids here. The owner is very unorganized. The playground is tiny unpaved and dirty during the summer time. Owner Collette is rude and extremely unprofessional and a fraud. She’s currently being investigated by the police. Take your business elsewhere. Don’t waste your time here.
The defamatory post of Audrey contained false information about Yunaland and Collett.
Yunaland or Collett have never committed fraud and there has been no investigation ever conducted by Police on either Collett or Yunaland.
Collett sent an email to Audrey on March 15, 2019, stating that Audrey had committed defamation and asking Audrey to remove post within 24 hours.
Audrey only edited the post, but did not remove her publication from Google.
Her present post on Google which is still defamatory is as follows:
A month ago
This is the worst daycare ever. Don’t waste your time bringing your kids here. The owner is very unorganized. The playground is not tidy, unpaved and dirty during the Summer time, etc.
- The publication by Audrey still contain false information as the ground of Yunaland is not dirty or unpaved. The photograph of playground of Yunaland is as below [photo follows]
[Emphasis added, to highlight the facts setting out the defamatory statements the defendants were alleged to have published]
[143] Instead of making the declaration that the reviews were defamatory and ordering the defendants in default to remove them, McSweeney J. ordered an uncontested trial of that claim, without prejudice to Yunaland’s right to seek that declaration and injunctive relief, either on an interim or other basis, pursuant to the rules and practice directions. Yunaland thereupon filed a new motion supported by material containing the portions of the Statement of Claim that had previously been omitted. Based on the facts alleged in the previously omitted paragraphs, and with Google’s consent, Van Melle J. made an Order on January 21, 2020, containing the declaration that Yunaland had requested, and on February 25, 2020, RSJ Ricchetti made a similar Order, declaring the posts to be defamatory and requiring the authors of the posts to remove them. On receiving those Orders, Google voluntarily removed the posts.
[144] If Google received notice of the allegedly defamatory content of the defendants’ reviews, pursuant to s. 5(1) of the Act, the evidence raises a genuine issue of fact as to whether the allegedly fictitious reviews were published in good faith or in mistake or misapprehension of the facts within the meaning of s. 5(2) of the Act.
[145] Google acknowledges that there is no evidence before the Court as to whether it ever asked the authors of the posts whether there was any evidence confirming the content’s accuracy. While this is evidence that lies squarely within Google’s knowledge, Google submits that the Court should not draw an adverse inference from its failure to provide such evidence, as it says that it was the obligation of Yunaland to adduce such evidence from it in its cross-examination of Google’s witnesses.
[146] I am not persuaded by Google’s argument that it was unable to determine whether the reviews were defamatory until the Court made an Order declaring them to be so. There is at least a genuine issue of fact, or of mixed fact and law, as to whether the reviews, on their face, were defamatory and whether Google, upon receiving notice of them, was under a legal obligation to remove them.
[147] These issues require a full discovery. It would undermine the objective of achieving a just outcome to determine the action at a motion for summary judgment without those issues having been fully addressed on discovery.
(d) Was Google a passive intermediary?
[148] Yunaland acknowledges that, in Canada, passive intermediaries are not publishers of content authored by others and bear no liability for defamation of which they have no knowledge. The issue in the present case is whether Google was, in fact, a passive intermediary at the material time.
[149] Yunaland asserts that Google ceased to be a passive intermediary when it took no action after Yunaland notified it of the defamatory content on its platform, either to examine or remove the reviews.
[150] Google disputes that if a passive intermediary is notified of defamatory content under its control, and does nothing to assess or remove it, the intermediary ceases to be passive and becomes an active participant in the defamation, bearing liability as such. Google relies on the decision of the Supreme Court in Crookes. However, the defendant in Crookes was the publisher of a link only to the allegedly defamatory material, whereas Google, in the present case, created the platform on which the allegedly defamatory reviews were posted and admits that it had the power to remove them.
[151] In Crookes, Abella J., speaking for the majority, states, at para. 16:
[16] To prove the publication element of defamation, a plaintiff must establish that the defendant has, by any act, conveyed defamatory meaning to a single third party who has received it (McNichol v. Grandy, 1931 CanLII 99 (SCC), [1931] S.C.R. 696, at p. 699). Traditionally, the form the defendant’s act takes and the manner in which it assists in causing the defamatory content to reach the third party are irrelevant:
There are no limitations on the manner in which defamatory matter may be published. Any act which has the effect of transferring the defamatory information to a third person constitutes a publication.
(Stanley v. Shaw, 2006 BCCA 467, 231 B.C.A.C. 186, at para. 5, citing Raymond E. Brown, The Law of Defamation in Canada [2nd ed.], vol. 1, at No. 7.3.)
[Emphasis added]
[152] Abella J. later states, at para. 20:
[20] … Such “subordinate distributors” may escape liability by showing that they “have no actual knowledge of an alleged libel, are aware of no circumstances to put them on notice to suspect a libel, and committed no negligence in failing to find out about the libel”
[21] … there must be “knowing involvement in the process of publication of the relevant words”
[Citations omitted; emphasis added]
[153] In referring to U.S. jurisprudence exculpating a person who simply refers to, or provides a hyperlink to, defamatory material that has been published by another, Abella J. stated:
[28] These features — that a person who refers to other content generally does not participate in its creation or development — serve to insulate from liability those involved in Internet communications in the United States: see Communications Decency Act of 1996, 47 U.S.C. §230 (1996); see also Jack M. Balkin, “The Future of Free Expression in a Digital Age” (2009), 36 Pepp. L. Rev. 427, at pp. 433-34; Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997); Barrett v. Rosenthal, 146 P.3d 510 (Cal. 2006); Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157 (9th Cir. 2008).
[Emphasis added]
[154] In the present case, Google did participate in the creation or development of the third party content by creating the platform on which the authors of the reviews first posted them. Additionally, Google acknowledges, in para. 5 of Anthony Nichols’ affidavit, that it “uses business information to help surface relevant local search results across Google, such as in Google Maps and Google Search. The business details displayed come from a variety of different sources, including business owners, reports from users, and other sources.”
[155] Abella J. further distinguishes, at para. 26, between the initial purveyor of content and the publisher of a hyperlink. She states:
[26] A reference to other content is fundamentally different from other acts involved in publication. Referencing on its own does not involve exerting control over the content. Communicating something is very different from merely communicating that something exists or where it exists. The former involves dissemination of the content, and suggests control over both the content and whether the content will reach an audience at all, while the latter does not. Even where the goal of the person referring to a defamatory publication is to expand that publication’s audience, his or her participation is merely ancillary to that of the initial publisher: with or without the reference, the allegedly defamatory information has already been made available to the public by the initial publisher or publishers’ acts. These features of references distinguish them from acts in the publication process like creating or posting the defamatory publication, and from repetition.
[Emphasis added]
[156] Google relies on the minority decision (of the Chief Justice and Fish J.) in Crookes, as set forth at para. 48 of the judgment. Those justices’ reasons suggest that publication requires adoption or endorsement. Google argues that it did not possess the control over publication and asserts that Yunaland should be required to demonstrate such adoption or endorsement.
[157] Read in context, it is evident that the Chief Justice and Fish J., in requiring adoption or endorsement, were addressing the actions of one who provides a hyperlink to defamatory content, not the actions of the entity who creates the platform for the initial dissemination of defamatory content. The Chief Justice and Fish J. state:
[48] In our view, the combined text and hyperlink may amount to publication of defamatory material in the hyperlink in some circumstances. Publication of a defamatory statement via a hyperlink should be found if the text indicates adoption or endorsement of the content of the hyperlinked text.
[Emphasis added]
[158] Google acknowledges that it had the ability to intervene and stop publication but says that that differs from control of the content. I disagree. It is the ability to stop the publication that, in part, distinguishes the party who initially publishes the content from the party that merely provides a hyperlink to it. The latter party does not have the ability to stop the publication, which is a form of control; the former does.
[159] Google acknowledges that it lacks the capacity to determine the authenticity of account holders. From the evidence before me, it is not clear whether the algorithms that Google employs to “help surface relevant local search results” are affected by the proliferation of negative reviews by fictitious account holders. Such evidence would be relevant to a determination as to whether Google is more than a passive intermediary and incurs liability by failing to take reasonable steps to confirm the identity of account holders and prevent the proliferation of fictitious accounts.
[160] There is a genuine issue of law as to whether Google, which “helps surface local search results,” is properly characterized as a “passive intermediary,” and whether it is exempted by law from liability based on that characterization, particularly in relation to defamatory content of which it had knowledge and took no steps to examine or remove.
[161] In Fairfax Media Publications Pty Ltd. v. Dylan Voller; Australian News Channel Pty Ltd v. Dylan Voller [2021] HCA 27, the High Court of Australia dismissed appeals from a judgment of the Court of Appeal and the Supreme Court of New South Wales concerning whether, by posting content relating to news stories about Mr. Voller, the respondent, on their respective public Facebook pages, the appellants were liable for the publication of allegedly defamatory "comments" that were posted by third-party Facebook users in response to the content.
[162] In Fairfax, the appellants were media companies that published newspapers and operated television stations in New South Wales. Each appellant maintained a public Facebook page on which they posted content relating to news stories and provided hyperlinks to those stories on their website. After posting content relating to particular news stories referring to Mr. Voller, including posts concerning his incarceration in a juvenile justice detention centre in the Northern Territory, a number of third-party Facebook users responded with comments that were alleged to be defamatory of Mr. Voller. Mr. Voller brought proceedings against the appellants alleging that they were liable for defamation as the publishers of those comments.
[163] The principal issue in the Fairfax case was whether Mr. Voller had "established the publication element of the cause of action of defamation against the defendant[s] in respect of each of the Facebook comments by third-party users." The Court of Appeal concluded that the primary judge did not err in answering that question in the affirmative. The High Court by majority dismissed the appeals and found that the appellants were the publishers of the third-party Facebook user comments. A majority of the Court held that the liability of a person as a publisher depends upon whether that person, by facilitating and encouraging the relevant communication, "participated" in the communication of the defamatory matter to a third person. The majority rejected the appellants' argument that for a person to be a publisher, they must know of the relevant defamatory matter and intend to convey it. Each appellant, by creating a public Facebook page and posting content on that page, facilitated, encouraged, and thereby assisted the publication of comments from third-party Facebook users. The appellants were therefore publishers of the third-party comments.
[164] There is a genuine issue of law in the present case as to whether Google, by continuing to provide a platform for the defendants’ defamatory reviews after receiving notice of them, either in the notices served on Google Canada or on Google at its offices in the U.S., or in the Statement of Claim subsequently served on it, published the content of the reviews that continued to appear on Google’s platform until Google removed them after RSJ Ricchetti made his Order dated February 25, 2020.
[165] Google asserted at the hearing that there was no evidence that Yunaland had employed the tools available to them to combat the alleged defamatory material contained on Google’s review application platform. What Yunaland failed to do is not material to the issue of whether Google is liable for allowing its platform to be used to amplify the alleged defamation after receiving Yunaland’s complaint concerning it. Google’s argument comes perilously close to blaming the victim for not having used all available means of defending themselves against a violation of their integrity, which has been disapproved of in other contexts.
[166] A subsidiary issue arises as to whether Google owed its users a duty of reasonable care to protect them against defamation occurring on its platform. Google lists the duty of care categories and submits that they do not apply in the present case. I find that there is a genuine issue of law in this regard that cannot reasonably be resolved on this motion and should await the outcome of Yunaland’s motion to amend its Claim.
[167] If Yunaland proves that Google was notified of the defamatory reviews and did nothing to examine or remove them until it received a court order requiring it to do so, Google would be deemed a publisher and would be liable as such. Google would be responsible for the defamation of Yunaland and their business if it knowingly failed to stop the use of its platform by users whose identity was not known or verified by Google.
[168] In Crookes, at para. 87, the Court stated:
If a defendant was made aware (or had reason to be aware) of defamatory information over which he or she had sufficient control, but decided to do nothing about it, this nonfeasance might amount to a deliberate act of approval, adoption, promotion, or ratification of the defamatory information (see, e.g., Frawley v. State of New South Wales, [2007] NSWSC 1379 (AustLII)).
[169] Google does not dispute the fact that, even after its lawyer received the Statement of Claim, Google informed Yunaland that the impugned reviews could be removed only if Yunaland obtained a court order. In Weaver v. Corcoran, 2017 BCCA 160, the plaintiff sued a newspaper over, among many other things, a number of reader posts on the newspaper’s website. The newspaper did not monitor the postings or know about them until it received a complaint from the plaintiff. The newspaper then removed the posts within 48 hours. The lower court stated, at para. 284:
Once the offensive comments were brought to the attention of the defendants, however, if immediate action is not taken to deal with these comments, the defendants would be considered publishers as at that date.
[170] The Court of Appeal allowed an appeal on other grounds and ordered a new trial. The Court of Appeal did not comment on the failure of the National Post or its principal, Mr. Fisher, to remove the posts when they received the plaintiff’s complaint. The Court of Appeal stated, in part:
iii) Publication
[86] Finally, the plaintiff must prove publication. It must be established that the defendant has, by any act, conveyed the defamatory meaning concerning the plaintiff to a third party, who has received it. Traditionally, any act which transferred defamatory information to a third person was considered publication. However, in the modern age this has been modified to exclude entirely passive acts, such as some forms of referencing or hyperlinking of defamatory material. Where the acts at issue merely transmit information in a content-neutral way, without expression, adoption or endorsement, they are generally not considered publication: Crookes at paras. 16, 21, 3.
…
Did the judge err in holding Mr. Fisher jointly liable for publication of the allegedly defamatory articles?
[102] The judge also held Mr. Fisher jointly liable for publication of the four articles, but did not explain the factual basis for her decision: paras. 179‒183. Although there was an admission in the pleadings that Mr. Fisher was the “Publisher” of The National Post, there is limited and competing recent authority on the extent to which holding that position should attract joint liability: see Graham, on the one hand, and Kent, on the other. Given the absence of a full factual context and in light of the fact that there must be a new trial, in my view it is undesirable to analyse this issue and I would decline to do so.
[171] In Carter v. B.C. Federation of Foster Parents Association, 2005 BCCA 398, 257 D.L.R. (4th) 133, the British Columbia Court of Appeal held, at para. 20:
If defamatory comments are available in cyberspace to harm the reputation of an individual, it seems appropriate that the individual ought to have a remedy. In the instant case, the offending comment remained available on the internet because the defendant respondent did not take effective steps to have the offensive material removed in a timely way.
[172] Google has pointed to no support, in law or in its terms of service, for knowingly permitting its platform to be used as a tool for defamation by one Google user against another. It is at least arguable that Google had no right, and no exemption under the law, to knowingly allow its platform to be used in that way by a fake user against any business or individual.
ii. Can the issues be resolved by employing the powers in Rule 20.04(2.1) and (2.2)?
[173] For the reasons stated above, I find that the issues of fact and of mixed fact and law cannot be resolved by employing the powers in Rule 20.04 to achieve a just result in this case. There are numerous disputed or unacknowledged facts, and full discovery and a trial are required to resolve them satisfactorily.
iii. Would granting summary judgment be a proportionate, more expeditious, and less expensive manner of achieving a just outcome than having the action proceed to trial?
[174] The issues of law in the present case are inextricably bound to the disputed issues of fact. Determining those issues without the context of the findings of fact, referred to above, which the Court will be called upon to make, would be unlikely to result in an early resolution of the action. Nor would determining the issues of law outside their factual matrix be likely to achieve a just outcome.
[175] Google submits that Yunaland can obtain the remedies it seeks from the other defendants. It notes that Yunaland has obtained default judgment against those defendants and an Order directing an assessment of damages. There is no evidence as to what those damages are or as to whether a judgment against the other defendants for them could be realized. It is therefore necessary, in the interests of justice, to permit Yunaland’s action against Google to proceed.
c) Should Yunaland be granted leave to make a motion to amend its Statement of Claim and, if so, on what terms?
[176] Yunaland has not moved to amend its Statement of Claim but has requested leave to make a motion to do so.
[177] Rule 26.01 of the Rules of Civil Procedure provides that on a motion, at any stage of an action, the Court shall grant leave to amend a pleading on such terms as are just, unless prejudice would result that could not be compensated for by costs or an adjournment. In Spencer v. Tom and Jerry's Bistro, 2009 CanLII 16583 (Ont. S.C.), at para. 24, Kelly J., on hearing a motion by the defendant for summary judgment dismissing the action based on a limitations defence, considered Rule 26.01 and granted the plaintiff leave to amend the Statement of Claim to plead discoverability.
[178] In Grant v. Grant (2001), 2001 CanLII 27938 (ON CA), 56 O.R. (3d) 225 (C.A.), the Court of Appeal held that the motion judge had erred in granting a motion for summary judgment dismissing a defamation action on the basis of a variance between evidence and the Statement of Claim. The Court held that the motion judge should have granted the plaintiff leave to amend the Statement of Claim to conform with the evidence of words spoken by the defendant.
[179] In opposing Yunaland’s request for leave to bring a motion to amend, Google relies on section 4 of the Limitations Act, 2002, S.O. 2002, c. 24, Sched. B. It provides:
Unless this Act provides otherwise, a proceeding shall not be commenced in respect of a claim after the second anniversary of the day on which the claim was discovered.
[180] Under the Limitations Act, 2002, amendments of a Statement of Claim will be rejected if they seek to advance new causes of action after the two-year limitation period has elapsed. When a proposed amendment asserts material facts that are essential to support the claim being advanced, and that were not substantially pleaded in the original claim, the amendment will be deemed to raise a new cause of action: Ascent Inc. v. Fox 40 International Inc., [2009] O.J. No. 2964 (S.C.), at para. 3; Bank of Nova Scotia v. PCL Constructors Canada Inc., [2009] O.J. No. 4347 (S.C.), at paras. 12-20.
[181] A “cause of action” is a “factual situation the existence of which entitles one person to obtain from the court a remedy against another person”: Ascent Inc., para. 3. The key is whether substantially all of the material facts giving rise to the “new cause of action” have previously been pleaded: Fitzpatrick Estate v. Medtronic Inc., 1996 CanLII 8118 (ON SC), [1996] O.J. No. 2439 (Ont. Gen. Div.), at para. 22, or whether new facts are sought to be added that are relied upon to support a new cause of action: English Estate v. Tregal Holdings Ltd., [2008] O.J. No. 4122 (S.C.), at para. 5.
[182] Yunaland has not moved to amend. It seeks only leave to bring such a motion. As such a motion is not before me, I am unable to determine whether Yunaland has pleaded all of the material facts giving rise to any new cause of action it may seek to advance. I note, however, that Yunaland’s Statement of Claim currently alleges, at paragraph 82 and following, that Google listing is an internet group which Google carries and stores through an unknown source; that Google provides uncontrolled public access to “Google listing”; that the authors of the defamatory content published that content on Google listing, which Google provides and controls; and that any internet user can publish any type of comment about any business on Google listing. Such facts could potentially support alternative claims for relief.
[183] A new cause of action is not asserted if the amendments:
(a) simply plead an alternative claim for relief arising from the same facts previously pleaded, and no new facts are relied upon: Schryer (Litigation Guardian of) v. 1232215 Ontario Ltd., [2009] O.J. No. 3578 (S.C.), at paras. 29-31, 38; Fitzpatrick Estate, at para. 44; MacGregor v. Royal & SunAlliance Insurance Co. of Canada, [2009] O.J. No. 1564 (Ont. S.C.J.), at para. 46, aff'd 2010 ONSC 3558 (Ont. Div. Ct.);
(b) assert different legal conclusions drawn from the same set of facts: Fitzpatrick Estate, at para. 44; or
(c) provide particulars of an allegation already pleaded: British Columbia Ferry Corp. v. T & N plc, [1993] B.C.J. No. 1827 (B.C.S.C.) [In Chambers], at para. 26, or additional facts upon which the original right of action is based: Budget Rent A Car of Edmonton Ltd. v. University of Toronto, 1991 CanLII 13069 (AB QB), [1991] A.J. No. 108 (Q.B.), citing Radigan v. Canadian Indemnity Co., 1953 CanLII 354 (ON CA), [1953] O.W.N. 788 (Ont. C.A.).
[184] The proposed amendment, to survive the limitation period, must rely on facts which have been substantially pleaded in the initial Statement of Claim: Bank of Montreal v. Morris, [2013] O.J. No. 3090 (Ont. S.C.J.) at para. 46; Timbers Estate v. Bank of Nova Scotia, 2011 ONSC 3639 (Ont. S.C.J.) at paras. 10-14.
[185] I find that there would be no prejudice to Google to permit Yunaland to make a motion to amend its Statement of Claim. It will be for the judge hearing the motion, and considering the wording of the proposed amendment, to decide whether the motion ought to be granted. The motion judge will determine whether Yunaland’s proposed Amended Statement of Claim asserts a new cause of action, or simply makes an alternative claim based on the same factual matrix that it asserted in its original Statement of Claim.
[186] Google has not demonstrated any prejudice that would result from Yunaland being granted leave to bring such a motion.
CONCLUSION AND ORDER
[187] For the foregoing reasons, it is ordered that:
The defendant Google’s motion for summary judgment is dismissed.
The plaintiffs Collett Thorpe and Yunaland have leave to make a motion to amend their Statement of Claim.
Google shall pay to the plaintiffs their costs of this motion, on a partial indemnity scale, which I fix in the amount of $13,252.02.
Price J.
Released: December 19, 2022
[^1]: Although Judge Berzon criticized the earlier Ninth Circuit decision in Dyroff (p. 87a), she concluded that the panel was bound by that erroneous interpretation of section 230: pp. 86a-92a. Judge Berzon therefore joined Judge Christen’s majority opinion, but called on the Ninth Circuit to grant rehearing en banc (that is, by the full court) to reconsider the issue: p. 91a.
[^2]: The plaintiffs in Dyroff petitioned for a rehearing by the full court, and urged the appeal court, on a rehearing, to adopt the narrower interpretation of section 230 advocated by Judges Berzon and Gould. The majority of the court of appeals voted to deny rehearing en banc: pp. 261a-262a. Judges Berzon and Gould dissented from the denial of a rehearing: p. 261a. Judge Gould filed a brief opinion stating that he dissented from the denial of rehearing for the reasons set out in his dissent from the panel decision: p. 262a.

