A Florida federal court recently dismissed a defamation lawsuit brought against social media giant Twitter in a case related to documents allegedly obtained from President Biden’s son Hunter’s laptop.1 The plaintiff, John Paul Mac Isaac, garnered national media attention as the former owner of a Delaware computer repair shop who allegedly obtained Hunter Biden’s laptop. He sued Twitter and a Florida company (Madbits LLC), claiming that Twitter defamed him through a series of public tweets.
Isaac argued that Twitter’s decision to lock The New York Post’s account while Post staffers “attempted to post and disseminate its exposé [about the purportedly lurid contents of the laptop] on the social media platform” was akin to calling him a hacker because Twitter, in tweets about that decision, cited its rationale for the time-limited ban as a violation of Twitter’s rules against “distribution of hacked material.”
The New York Post article at issue referred to documents allegedly obtained from Hunter Biden’s laptop. The article indicated that these documents originated from the owner of a computer repair shop in Delaware, but never identified that owner. Neither did any tweets by Twitter about the article. Twitter’s outside legal team seized on that fact, arguing that the tweets failed to satisfy the elements of a defamation claim. Twitter’s argument was simple: the tweets did not concern or identify Isaac or his business, by name or by implication; therefore, they could not be defamatory.
In a sixteen-page opinion that dismissed Isaac’s complaint with prejudice and awarded Twitter its attorney’s fees and costs under Florida’s anti-SLAPP (Strategic Lawsuits Against Public Participation) statute, Judge Beth Bloom agreed with Twitter. In dismissing the claim, the court focused on the fact that the only persons identified in Twitter’s explanations of its decision to lock the account were The New York Post, Hunter Biden, a “Ukrainian biz man” and “dad” – “not Isaac, his business, or any other descriptive information that made Isaac’s identity readily ascertainable.”
Notably, the court referenced several scenarios in which a social media company could potentially be liable for such posts. For example, had the tweet identified the “Mac Shop,” “a Delaware repair shop” or even included a photo of the repair authorization form, the court suggested, a plausible claim might have arisen. However, in this instance, the court refused to impose liability against Twitter where, in the court’s view, Twitter was “meticulous enough” to preserve Isaac’s anonymity.
The court’s decision is undoubtedly a victory for tech/social media companies currently facing increasing scrutiny from lawmakers and the public over their publishing activities and their actions to moderate content. Many lawmakers and some candidates for public office have in recent months called for Section 230 of the Communications Decency Act – a provision that largely insulates such companies from liability for hosting or “re-publishing” users’ postings – to be modified or repealed. Section 230 implications arose early in Isaac’s lawsuit against Twitter, when the court sua sponte issued an order to show cause requiring Isaac to explain why Section 230 did not bar his case.
Isaac maintained that Section 230 was inapplicable because his claims related directly to statements published by Twitter itself. He also asserted that Twitter did not meet the Communications Decency Act’s definition of a provider of an interactive computer service (ICS), because Twitter was not acting as an ICS provider when it published the tweets. Instead, he argued, Twitter acted as a content provider when it created and developed its hacked materials policy, and the tweets that allegedly defamed him had come from Twitter’s own staff.
In response to Isaac’s arguments, Twitter maintained that it was acting as a mere publisher in monitoring content and enforcing its User Agreement. While Twitter agreed there are certain circumstances in which an ICS can become a content provider – if, for example, a social media company or other provider were to edit a user’s post to change its meaning2 – Twitter maintained that no such circumstance existed here.
After ordering both parties to weigh in on the applicability of Section 230, the court appears to have stayed silent on this issue, at least for now. The case thus leaves at least some question as to when a social media company can benefit from the liability protections afforded by Section 230.
Social media giants such as Twitter and Facebook are proponents of Section 230, often invoking it to expeditiously dismiss lawsuits on the basis that they have the right to make decisions on content moderation. Opponents contend that these companies are using Section 230 to censor free speech (a contention that appears to ignore meaningful distinctions between what private companies and the government can do with regard to limiting speech). Meanwhile, lawmakers continue to attempt to chip away at this safeguard by introducing legislation to create exceptions to its applicability. For example, on July 22, 2021, a bill titled the Health Misinformation Act3 was introduced in the U.S. Congress with the evident intention of exposing social media companies to civil liability for inaccurate statements published about health information on their websites, including information regarding COVID-19 vaccines. This bill is just one of many aimed at reforming Section 230, and should be monitored closely by social media companies in the coming months.
Stephanie Koutsodendris, an attorney who recently joined our firm, was part of the legal team that successfully represented Twitter in this lawsuit.
See generally Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1174 (9th Cir. 2008).
S. 2448, 117th Congress: Health Misinformation Act of 2021. GovTrack.us, 2021. Accessed October 3, 2021. https://www.govtrack.us/congress/bills/117/s2448.