Artificial Intelligence & American Copyright Law: Analyzing the Copyright Office’s AI Report

Copyright Office’s AI Report: The Good, The Bad, and The Controversial

The Copyright Office just dropped Part 3 of its AI report, which aims to address how copyright law applies to artificial intelligence. The thing that’s got everyone talking is that the report was supposed to tackle infringement issues head on, but instead teased us by saying the answer will come in “Part 4,” expected to be released at a later date. Let’s dive into what was actually discussed.

Legal Theory: A Case by Case Basis

The report’s central thesis is a pretty straightforward legal theory: there should be no blanket rule on whether training AI on copyrighted content constitutes infringement or fair use. Everything gets the case-by-case treatment, which is both realistic and frustrating, depending on where you sit. Most lawyers like clear bright-line rules backed by years of precedent, but when building legal frameworks for emerging technologies, the bright-line approach is easier said than done.

The report acknowledges that scraping content for training data is different from generating outputs, and those are different from outputs that get used commercially. Each stage implicates different exclusive rights, and each deserves separate analysis. What’s actually useful here is the recognition that AI development involves multiple stages, each with its own copyright implications.

This multi-stage approach makes sense, but it also means more complexity for everyone involved. Tech companies can’t just assume that fair use covers everything they’re doing, and content creators can’t assume it covers nothing. The devil is in the details.

Transformative Use Gets Complicated

The report reaffirms that various uses of copyrighted works in AI training are “likely to be transformative,” but then immediately complicates things by noting that transformative doesn’t automatically mean fair. The fairness analysis depends on what works were used, where they came from, what purpose they served, and what controls exist on outputs.

This nuanced approach is probably correct legally, but it’s also a nightmare for anyone trying to build AI systems at scale. You can’t just slap a “transformative use” label on everything and call it a day. The source of the material matters: whether the content was pirated or legally obtained can factor into the analysis. Purpose matters too, since commercial use and research use will likely yield different results. And control and mitigation matter, because developing the necessary guardrails is paramount to preventing direct copying or market substitution.

Nothing too revolutionary here, but the emphasis on these factors signals that the Copyright Office is taking a more sophisticated approach than some of the simplistic takes we’ve seen on this matter. That should be reassuring, since a one-size-fits-all approach at such an early stage of AI development could stifle innovation. On the other hand, if things are left too uncontrolled, copyrighted works may face widespread infringement.

The Fourth Factor Controversy

Here’s where things get interesting and controversial. The report takes an expansive view of the fourth fair use factor: the effect of the use on the potential market for the copyrighted work. The concern is that a flood of AI-generated works raises fears of market dilution, lost licensing opportunities, and broader economic harm.

The Office’s position is that the statute covers any “effect” on the potential market, which is a broad interpretation. But that breadth has a reason: the Office is worried about the “speed and scale” at which AI systems can generate content, creating what it sees as a “serious risk of diluting markets” for similar works. Imagine an artist creates a new masterpiece, only for an AI model to ingest it and make the piece easily recreatable by anyone, diluting the value of the original. These kinds of things are happening on the market today.

This gets particularly thorny when it comes to style. The report acknowledges that copyright doesn’t protect style per se, but then argues that AI models generating “material stylistically similar to works in their training data” could still cause market harm. That’s a fascinating tension: you can’t copyright a style, but you might be able to claim market harm from AI systems that replicate it too effectively. It will be interesting to see how courts apply these rules in the coming years.

This interpretation could be a game-changer, and not necessarily in a good way for AI developers. If every stylistic similarity becomes a potential market harm argument, the fair use analysis becomes much more restrictive than many in the tech industry have been assuming.

The Guardrails

One of the more practical takeaways from the report is its emphasis on “guardrails” as a way to reduce infringement risk. The message is clear: if you’re building AI systems, you better have robust controls in place to prevent direct copying, attribution failures, and market substitution.

This is where the rubber meets the road for AI companies. Technical safeguards, content filtering, attribution systems, and output controls aren’t just left to the discretion of engineers anymore; they’re becoming essential elements of any defensible fair use argument.

The report doesn’t specify exactly what guardrails are sufficient, which leaves everyone guessing. But the implication is clear: the more you can show you’re taking steps to prevent harmful outputs, the stronger your fair use position becomes. So, theoretically, a developer with enough guardrails may be able to mitigate damages if the model accidentally outputs copyrighted material.
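To make the idea concrete, here is a minimal, hypothetical sketch of one kind of output-control guardrail: before returning generated text, a wrapper compares it against a set of protected passages and withholds near-verbatim reproductions. The passage list, the similarity threshold, and the `model_generate` callable are illustrative placeholders, not any particular company’s system; real deployments rely on far more sophisticated fingerprinting and filtering.

```python
# Hypothetical sketch of an output-filtering guardrail (not a production design).
from difflib import SequenceMatcher

# Placeholder passages the developer does not want reproduced verbatim.
PROTECTED_PASSAGES = [
    "It was the best of times, it was the worst of times, it was the age of wisdom...",
]

SIMILARITY_THRESHOLD = 0.85  # arbitrary, illustrative cutoff


def violates_guardrail(generated_text: str) -> bool:
    """Return True if the output is a near-verbatim copy of a protected passage."""
    for passage in PROTECTED_PASSAGES:
        ratio = SequenceMatcher(None, generated_text.lower(), passage.lower()).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            return True
    return False


def safe_generate(model_generate, prompt: str) -> str:
    """Wrap a model call with the output filter and withhold near-copies."""
    output = model_generate(prompt)
    if violates_guardrail(output):
        return "[output withheld: too similar to a protected work]"
    return output


# Example usage with a stand-in "model" that just echoes the prompt.
if __name__ == "__main__":
    print(safe_generate(lambda p: p, "Summarize the report in one sentence."))
```

The point of the sketch is simply that a documented, testable filter of this sort is the kind of evidence a developer could point to when arguing it took reasonable steps to prevent direct copying.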

RAG Gets Attention

The report also dives into Retrieval Augmented Generation (RAG), which is significant because RAG systems work differently from traditional training approaches. Instead of baking copyrighted content into model weights, RAG systems retrieve and reference content dynamically.

This creates different copyright implications: potentially more like traditional quotation and citation than wholesale copying. But it also creates new challenges around attribution, licensing, and fair use analysis. The report doesn’t resolve these issues, but it signals that the Copyright Office is paying attention to the technical details that matter.
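For readers less familiar with the pattern, here is a minimal, hypothetical sketch of RAG: documents are retrieved at query time and handed to the model along with their source labels, which is exactly why attribution and licensing questions loom so large. The corpus, the naive keyword-overlap scoring, and the `call_model` callable are illustrative placeholders, not any particular vendor’s API.

```python
# Hypothetical sketch of the retrieval-augmented generation (RAG) pattern.
from dataclasses import dataclass


@dataclass
class Document:
    source: str  # provenance label, kept for attribution and licensing checks
    text: str


# Placeholder corpus standing in for licensed or referenced material.
CORPUS = [
    Document("Example licensed treatise", "Fair use is evaluated under four statutory factors..."),
    Document("Example licensed news article", "The Copyright Office released Part 3 of its AI report..."),
]


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(query_words & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_attribution(query: str, call_model) -> str:
    """Build a prompt that quotes retrieved sources and preserves their citations."""
    docs = retrieve(query, CORPUS)
    context = "\n\n".join(f"[Source: {d.source}]\n{d.text}" for d in docs)
    prompt = (
        "Answer the question using only the sources below, and cite each source used.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return call_model(prompt)


# Example usage with a stand-in "model" that just returns the prompt it was given.
if __name__ == "__main__":
    print(answer_with_attribution("What did the AI report say about fair use?", lambda p: p))
```

Because the retrieved text is quoted at inference time rather than absorbed into weights, the copyright analysis shifts toward questions of how much is reproduced, whether the source is licensed, and whether attribution is preserved.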

Licensing

The report endorses voluntary licensing and extended collective licensing as potential solutions, while rejecting compulsory licensing schemes or new legislation “for now.” This is probably the most politically palatable position, but it doesn’t solve the practical problems.

Voluntary licensing sounds great in theory, but the transaction costs are enormous when you’re dealing with millions of works from thousands of rights holders. Extended collective licensing might work for some use cases, but it requires coordination that doesn’t currently exist in most creative industries.

The “for now” qualifier is doing a lot of work here. It suggests that if voluntary solutions don’t emerge, more aggressive interventions might be on the table later.

The Real Stakes

What makes this report particularly significant isn’t just what it says, but what it signals about the broader policy direction. The Copyright Office is clearly trying to thread the needle between protecting creators and enabling innovation, but the emphasis on expansive market harm analysis tilts toward the protection side.

For AI companies, this report is a warning shot. The days of assuming that everything falls under fair use are over. The need for licensing, guardrails, and careful legal analysis is becoming unavoidable.

For content creators, it’s a mixed bag. The report takes their concerns seriously and provides some theoretical protection, but it doesn’t offer the clear-cut prohibitions that some have been seeking.

The real test will come in the courts, where these theoretical frameworks meet practical disputes. But this report will likely influence how those cases get decided, making it required reading for anyone in the AI space.

As we can see, the intersection of AI and copyright law is only becoming more complex. The simple answers that everyone wants don’t exist, and this report makes that abundantly clear. The question now is whether the industry can adapt to this new reality or whether we’re heading for a collision that nobody really wants.

Why’s America Sleeping? A Discussion Regarding the UnitedHealthcare CEO’s Assassination

“It takes violent shocks to change an entire nation’s psychology.”

– John F. Kennedy

This quote, from John F. Kennedy’s magnum opus ‘Why England Slept,’ encapsulates the current collective psychology of the United States after the tragic assassination of Brian Thompson. Some people celebrated the CEO’s death, a symbol of the frustration many Americans have been feeling about the nation’s healthcare system. Critiques of the healthcare system are definitely warranted, and Luigi Mangione’s tragic act has once again put the nation’s healthcare debate at the forefront of public discourse.

President Kennedy’s quote is correct: violent acts can often change an entire nation’s collective psychology, and there are plenty of examples in history that support that proposition. However, people are wrong to assume that the assassination will trigger meaningful change simply because of the fear healthcare insurance executives may now feel. Idealism can cloud the reality of how institutions operate in the real world, and history has proven that powerful players rarely relinquish control freely. The healthcare industry could just as easily double down and refuse to budge, further entrenching the “us vs. them” mentality that pervades many contemporary national debates. Admittedly, the act could still result in meaningful change in the healthcare industry, but not in the way people celebrating the death would imagine. A good case study as to why is the Ludlow Massacre.

On April 20, 1914, in Ludlow, Colorado, striking coal miners demanded better pay, safer working conditions, and the right to unionize. The Colorado National Guard and company-hired guards attacked the strikers, killing protestors and some of their family members. The Ludlow Massacre led to the Colorado Coalfield War, in which workers formed a militia and began attacking Colorado National Guardsmen and private law enforcement. The workers successfully took many of their opposition’s positions and suffered fewer casualties, but when the dust settled the strikers’ demands were not met, the union did not obtain recognition, and many striking workers were replaced. Further, 408 strikers were arrested, and 332 of them were indicted for murder. In short, the institutions doubled down on the crackdown rather than concede.

Though the workers themselves did not reach their goals, the tragedy at Ludlow spurred a greater national debate on workers’ rights in the United States. Slowly, the grievances raised by the Ludlow Massacre led to the enactment of federal labor laws that we still use today. American society should turn this tragedy into a positive and reinvigorate the discussion and action that will lead to fundamental changes in the healthcare industry. The Ludlow Massacre forced the nation to confront workers’ rights, and the tragic assassination of Brian Thompson could prompt a similar reckoning with the systemic failures in healthcare. History shows, however, that institutional change is slow and often requires sustained public pressure. Hopefully, this time around the change will come sooner; if not, there are indications that matters may get worse rather than better. Analyzing the economic incentives causing this turmoil will illuminate the problem areas in the sector and hopefully lead to some practical solutions.

Economic Moral Hazards

The problem with the healthcare sector is that it produces bad economic incentives.[1] Healthy economic incentives encourage behavior that benefits both individuals and society; they promote positive economic action while discouraging negative consequences such as waste or harm. Efficiency standards for cars are a good example: they incentivize manufacturers to produce efficient automobiles by offering various government benefits, such as tax breaks, recognition for meeting higher energy efficiency standards, or access to lucrative government contracts. This makes energy utilization more effective, lowers bills for consumers, and helps reduce the environmental impact of wasteful technology.

The healthcare industry seems to be running in the opposite direction. Large hospitals commonly increase prices for services and lab technology, knowing that insurers and government programs will foot the bill one way or another.[2] A big reason hospitals can do this is the lack of competition within the sector.[3] On average, Americans have access to only a few healthcare providers, which incentivizes monopolistic practices such as price gouging.[4] These practices shift the financial burden onto patients, insurers, and taxpayers, exacerbating the system’s inefficiencies. Insurance companies also contribute to bad economic incentives, but in a different way.

Source: American Enterprise Institute

Health insurers also contribute to inflated healthcare prices. Insurance companies have few incentives to negotiate for better rates or challenge the high prices set by hospitals; they are well aware that they can pass those costs on to consumers in the form of higher premiums or deductibles, and doing so maximizes profits in line with their fiduciary duty to shareholders.[5] The result is a disconnect between the price of healthcare and the actual cost to consumers, which keeps the overall cost of the system inflated.

Additionally, some companies during economic downturns may focus only on the volume of services provided rather than the quality or necessity of those services. This can encourage doctors to prescribe unnecessary treatments, tests, or procedures, which drives up overall healthcare costs. If you are fully covered, getting extensive tests can benefit your health, but unnecessary care drives up prices for people who do not have adequate coverage. Higher utilization of healthcare services, necessary or not, artificially inflates demand, which providers often use to justify price increases. Most insurance companies operate within fee-for-service payment systems, where providers are reimbursed based on the volume of services delivered rather than the value or outcomes. This further incentivizes unnecessary treatments, tests, and procedures, as healthcare providers have a financial interest in maximizing billable services.

Further, the administrative complexity of health insurance adds significant costs to the healthcare system. Insurers maintain vast bureaucracies to process claims, determine coverage, and manage provider networks, which requires substantial resources. Anecdotally, having spent some time in the insurance sector, I can say that many of these administrative tasks incentivize an incredible amount of waste. These costs are ultimately passed on to consumers. Administrative expenses in the U.S. healthcare system account for nearly 8% of total spending, compared to 2-3% in countries with simpler, more centralized systems.[6]

Action is Necessary

The tragic events surrounding Brian Thompson’s assassination have understandably stirred intense emotions and reignited a national conversation about the flaws in our healthcare system. While these tragedies can bring the issue to the forefront, history shows us that meaningful change doesn’t come from fleeting moments of outrage. The shockwaves from Thompson’s death can grab attention temporarily, but true change only happens when we confront the deeper economic incentives that drive the inefficiencies and inequalities in healthcare.

Reform, as we’ve seen in the past, is rarely quick or easy. It faces resistance from entrenched interests that benefit from the status quo. But the time to act is now. The monopolistic pricing, the disconnect between what healthcare actually costs and what patients pay, and the lack of meaningful negotiation by insurers must all be tackled with urgency. It’s time to rethink the economic incentives behind the healthcare system and shift the focus toward transparency, competition, and patient-centered care. The current model is unsustainable, and the responsibility for change lies with all of us: policymakers, healthcare providers, insurers, and the public.

Let us use this tragedy not as a fleeting moment of anger but as a rallying point to demand systemic reform. By ensuring that economic incentives align with the well-being of patients and the long-term sustainability of the system, we can move toward a healthcare system that serves the needs of every American, not just the powerful few. Now is the time for thoughtful, deliberate action to reform the healthcare system in a way that reflects the values of justice, fairness, and efficiency for all.

“The time to repair the roof is when the sun is shining.”

– John F. Kennedy

Right now, I’m sad to say, it seems like we’re attempting to repair a roof in the middle of a tornado. Urgent action is needed.


Sources

  1. According to the Journal of the American Medical Association (JAMA), the U.S. healthcare system is plagued by administrative inefficiencies, price inflation, and overuse of medical services, which are driven by poorly aligned incentives among providers, insurers, and payers.

    Source: JAMA. “Waste in the US Health Care System: Estimated Costs and Potential for Savings.” (2019). ↩︎
  2. Research from the RAND Corporation indicates that hospitals charge private insurers an average of 247% of Medicare rates for the same services. This price disparity exists because private insurers lack the bargaining power to negotiate rates effectively, and hospitals rely on these inflated payments to subsidize their operations.

    Source: RAND Corporation. “Prices Paid to Hospitals by Private Health Plans Are High Relative to Medicare and Vary Widely.” (2020). ↩︎
  3. Research shows that hospital consolidation reduces competition and leads to higher prices. A study by the National Bureau of Economic Research (NBER) found that hospital mergers result in price increases of 6% to 18%, depending on the level of market concentration.

    Source: NBER. “The Price Effects of Cross-Market Hospital Mergers.” (2018).
     The Health Care Cost Institute (HCCI) reports that the average price for hospital services is significantly higher in concentrated markets than in competitive ones.

    Source: HCCI. “Healthy Marketplace Index.” (2020). ↩︎
  4. The American Medical Association (AMA) found that in 2019, 90% of metropolitan areas in the U.S. were highly concentrated for hospital markets, meaning patients had limited choices among providers. The same is true in many rural areas.

    Source: AMA. “Competition in Health Insurance: A Comprehensive Study of U.S. Markets.” (2019). ↩︎
  5. Premiums and deductibles for employer-sponsored health insurance have been steadily rising, with average family premiums increasing by 55% over the past decade. Insurers often attribute this to rising healthcare costs from hospitals and providers.

    Source: KFF. “2022 Employer Health Benefits Survey.” ↩︎
  6. The study highlights the disproportionately high administrative costs in the U.S. healthcare system compared to other high-income nations with centralized systems, where administrative spending ranges between 2% and 3% of total healthcare expenditures.

       Source: Woolhandler, S., & Himmelstein, D. U. “Administrative Work Consumes One-Quarter of U.S. Physicians’ Working Hours and Lowers Their Career Satisfaction.” Health Affairs, 2014. ↩︎

Section 230: From Jordan Belfort to Gonzalez, the Law That Made the Modern Internet

On May 24, 1995, Jordan Belfort’s brokerage firm Stratton Oakmont successfully sued Prodigy Services Company in a New York court for defamation. Little did anyone know that Stratton’s win over Prodigy would be the catalyst that changed the internet forever. The so-called Wolves of Wall Street had unknowingly set a dangerous precedent that threatened the tech industry.

Stratton Oakmont v. Prodigy Services

Prodigy was an online service that more or less mirrored modern-day social media sites, serving over 2 million people at its peak. Users had access to a broad range of services such as news, weather updates, shopping, and bulletin boards. One of Prodigy’s best-known bulletin boards was Money Talk, a popular forum where members would discuss economics, finance, and stocks, similar to Reddit’s WallStreetBets forum. Prodigy also contracted with independent moderators to vet and participate in the board discussions, similar to newspaper editors, but ones who engaged with their audience far more.

In 1994, two posts would subject Prodigy to legal liability. On October 23rd and 25th, an unidentified user posted on the Money Talk bulletin board claiming that Stratton Oakmont was committing SEC violations and engaging in fraud in connection with an IPO it was involved in (Solomon-Page’s IPO). The poster claimed:

  • the Solomon-Page IPO was a “major criminal fraud” and “100% criminal fraud”
  • Daniel Porush was a “soon to be proven criminal”
  • Stratton was a “cult of brokers who either lie for a living or get fired.”

Ironically, many of these claims would turn out to be true; at the time, however, they were unsubstantiated, since there was no concrete evidence to back them up. After Stratton was made aware of the posts, the company and Daniel Porush (aka Jonah Hill’s character in the movie) commenced legal action against Prodigy for defamation based on the libelous statements made on Money Talk.

In the United States, defamation claims are not plaintiff-friendly due to the strong protections the 1st Amendment offers. In general, to succeed on a defamation claim, a plaintiff must prove four elements:

1) a false statement purporting to be fact;

2) publication or communication of that statement to a third person;

3) fault amounting to at least negligence; and

4) damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.

In the suit brought by Stratton against Prodigy, the court focused on elements 2 and 3: namely, whether Prodigy was a publisher and whether the moderators’ acts or omissions while editing the Money Talk bulletin board amounted to at least negligence. The court ruled in favor of Stratton Oakmont.

The court reasoned that an operator of an online message board is considered a publisher for purposes of defamation liability if the operator holds itself out as controlling the content of the board and implements that control through guidelines and screening programs. An entity that repeats or otherwise republishes a libelous post is subject to liability as if it had originally published it. A party that merely distributes others’ content, by contrast, is a distributor, which the court described as a “passive conduit.” A passive conduit doesn’t face liability for libel absent a finding that it knew or had reason to know that the distributed content contained defamatory statements. Basically, if you had a content moderation system for user-generated content, your website was likely liable for defamation. Defenses such as the impracticability of moderating millions of user-generated posts were rejected and would not shield a website from defamation claims. This ruling shook the tech and internet industry, threatening to stunt and undo years of innovation.

To avoid a barrage of lawsuits, the tech industry successfully lobbied Congress to act after the Stratton ruling. In 1996, Congress passed the Communications Decency Act, which dealt with various internet-related issues. The provision most pertinent to us is Section 230(c).

Jeff Kosseff, one of the leading scholars on Section 230, describes it as “the twenty-six words that created the internet.” Those 26 words are:

 “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

These words provide online platforms with immunity from liability for user-generated content. Basically, online platforms such as social media sites, forums, and search engines cannot be sued or prosecuted for what a user posts on their platforms, even if the posts themselves are defamatory, false, or harmful.

This immunity has been vital in enabling the growth of the internet and the rise of social media platforms. It has allowed these platforms to provide a space for free expression and to facilitate the exchange of information and ideas without fear of legal consequences, unlike Prodigy, which operated before Section 230’s protections existed. It has also allowed smaller and newer online platforms to compete with established ones without having to worry about legal liability. Without Section 230, the internet you and I know and love would not exist; I likely could not publish my articles without exposing myself to liability.

Recently, however, there have been attacks on and concerns over the immunity Section 230 provides, specifically that online platforms are essentially not responsible for any harmful content on their platforms, such as hate speech, harassment, and misinformation. Critics argue that Section 230 has created an environment in which online hate speech and harassment can thrive. One recent case that puts forth such an argument, Gonzalez v. Google, has made it all the way to the Supreme Court.

Gonzalez v. Google & Its Implications

Gonzalez alleges that ISIS used YouTube (owned by Google) to recruit members into its terror cells and “communicate its desired messages,” which led to the horrific events that occurred in Paris in 2015. Nohemi Gonzalez, a U.S. citizen, was killed during the ISIS terror attacks that gripped the world that year. ISIS would later claim full responsibility for the attacks that led to her untimely passing.

Gonzalez argues that because YouTube helped fuel “the rise of ISIS” by knowingly recommending ISIS videos to its users, Google is directly responsible for causing the Paris attack. The plaintiffs back up the claim that Google knew of such activity by alleging that “[d]espite extensive media coverage, complaints, legal warnings, congressional hearings, and other attention for providing online social media platform and communications services to ISIS, prior to the Paris attacks Google continued to provide those resources and services to ISIS and its affiliates, refusing to actively identify ISIS YouTube accounts and only reviewing accounts reported by other YouTube users.” Their argument is that Google’s recommendation algorithms fall outside the scope of Section 230 and therefore subject Google to liability.

Google contends that Section 230 fully immunizes it from such a suit based on judicial precedent and congressional intent, that its terms of service directly prohibit content that promotes terrorism, and that it actively blocked such content when it was published by hiring Middle Eastern content moderators who worked around the clock to flag terroristic content. Before the case arrived at the Supreme Court, every lower court found in favor of Google.

Jess Miers, a prominent Section 230 scholar, notes that this case “tees up a contentious question for the Supreme Court: whether Section 230 — a law that empowers websites to host, display and moderate user content — covers algorithmic curation.” She points out that the vast majority of websites use non-neutral algorithms, and that if the Supreme Court were to side with Gonzalez it would open the floodgates of litigation against online service providers that rely on algorithms to function. Such a ruling could also incentivize states to curate the internet to achieve their own ideological ends, such as punishing websites for cracking down on misinformation or for providing information about abortion services. This would give states too much power and could lead to arbitrary curation of the information you see on the internet, a significant blow to consumers and, arguably, to the 1st Amendment interests of Americans.

Only time will tell what happens with Section 230, as the Court is expected to make its ruling this summer. Hopefully, it makes the right decision.

Sources:

Stratton Oakmont, Inc. and Daniel Porush v. Prodigy Services Company, Supreme Court, Nassau County, New York, Trial IAS Part 34 (1995).

Jeff Kosseff: https://www.propublica.org/article/nsu-section-230

Jess Miers: High Court Should Protect Section 230 In Google Case https://www.law360.com/articles/1567399


Briefs of both parties in the Supreme Court: Reynaldo Gonzalez, et al. v. Google LLC.

The Development of Cyber Law: Past and Future

Cyber law is often seen as a developing field within international law, and rightly so, considering that the internet is a relatively recent development in human history. That is not to say, however, that there has been no development within the field. Cyber law encompasses a broad range of topics, such as intellectual property, data privacy, and censorship, and some of its rules are clearly defined. This paper narrows its scope to where cyber law is least developed: arguably, the areas of data privacy and unauthorized access. These areas are often the least developed because they used to be governed by legislation written for traditional communication networks, such as mail and phone networks. And because data privacy and unauthorized access are the least developed areas, they are the most abused by agents committing cybercrimes, largely because these sectors lack enforcement and detection methods. The regions where cyber law is most enforceable, by contrast, have backed their laws with enforcement and detection mechanisms that allow them to be practically applied. To understand how that is possible, some background on cybercrime and cyber law is necessary.

 

Background

Cybercrime has existed since the 1970s, in the form of network attacks on phone companies. Hackers would infiltrate telephone networks, enabling them to create connections, reroute calls, and use payphones for free (The History of Phone Phreaking). This early era of hacking was a problem because there was no legislation to guide law enforcement in accurately charging criminals. Arguably, it was this lack of legislation that allowed computer hackers to hone their techniques of network infiltration and data theft. The consequences became visible in the late 1980s, when large-scale system attacks became more frequent and more destructive. One major series of intrusions was perpetrated by a group known as the 414s. Surprisingly, this was not a group of hardened criminals but six teenagers from Milwaukee, Wisconsin, who managed to hack high-profile systems ranging from a nuclear weapons laboratory to banks (Storr). The hackers’ intentions did not appear malicious, since they did not steal any information from these systems, but the 414 hacks were not harmless either: they cost a research company $1,500 after the hackers deleted some billing records. Other major hacks were far less benign. The notorious Morris worm is a prime example of the harmful effects of unwarranted network infiltration, as it was one of the first attacks distributed across the public internet (Sack). The worm was created by Robert Morris, a graduate student at Cornell, to showcase the security problems of the internet; it would go on to cause between $100,000 and $10,000,000 in damage by infiltrating networks and rendering them non-operational (Newman). Clifford Stoll, one of the people responsible for purging the worm, described it as follows:

“I surveyed the network, and found that two thousand computers were infected within fifteen hours. These machines were dead in the water—useless until disinfected. And removing the virus often took two days.” (Sack)

Additionally, one of Stoll’s colleagues estimated that 6,000 computers were infected. That may not sound like much, but only about 60,000 computers were connected to the internet at the time, meaning roughly 10% of the internet was infected (Sack). Such attacks forced legislators to address the problems of cybercrime, specifically network infiltration. One of the first pieces of comprehensive cyber law to address these issues was the Computer Fraud and Abuse Act, enacted by the United States Congress.

Computer Fraud and Abuse Act

After a slew of cyber-attacks plagued the world, the U.S. Congress decided to address cybercrime specifically. Prior to the Computer Fraud and Abuse Act (CFAA), cybercrime was prosecuted under mail and wire fraud statutes, but that approach proved insufficiently comprehensive. The CFAA’s framework defined what cybercrime entailed. One of the pioneering things the legislation did was define which computers were off limits to unauthorized access. Under Title 18, Section 1030 of the United States Code, it is unlawful to access a computer that is:

“(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or

(B) which is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States.” -18 U.S. Code § 1030 – Fraud and related activity in connection with computers (Cornell)

Basically, stealing information from a computer in a way that affects the United States via interstate or foreign commerce or communication is forbidden. These specifications were enough to convict various individuals who committed cybercrimes in the late 1980s. Interestingly enough, Robert Morris was the first person convicted of violating the Computer Fraud and Abuse Act, specifically because he intentionally accessed several computers without authorization in a way that negatively affected commerce and communication within the U.S. (Newman). Arguably, the enforcement power of the CFAA provided the essential precedent the international community needed to create sufficient cyber laws.

Budapest Convention on Cybercrime

In November 2001, the first international convention on cybercrime took place. The scope of the Budapest Convention on Cybercrime (BCC), unlike that of the CFAA, was wide: it addressed intellectual property law, fraud, and child pornography. The convention also found it important to define what unlawful access to a computer constitutes:

“..A Party may require that the offence be committed by infringing security measures, with the intent of obtaining computer data or other dishonest intent, or in relation to a computer system that is connected to another computer system.” -Article 2 of BCC (Council Of Europe)

Here, arguably, the BCC is more specific than the CFAA in defining unlawful access. The reason is that it sets aside economic and social effects and directly addresses what unlawful access is, namely:

“..the intent of obtaining computer data or other dishonest intent…to a computer system that is connected to another computer system”.

The BCC sets aside economic and social effects because some hacks may not influence the economy or society; had the CFAA definition been kept, the scope of data infiltration would have been limited to social and economic consequences for specific nations. Furthermore, the BCC’s unlawful access clause has been a topic of discussion for legal entities and private corporations.

Modern Developments

EU v. Facebook

The precedent set by the BCC on data infiltration remains relevant today, especially when considering the role private corporations play in data security. This is evident in Facebook’s recent large data breach. To understand why Facebook could be an important catalyst for cybercrime, we need to understand Facebook’s role on the internet. Simply put, Facebook is a social media website that connects private users via a public platform, but despite being a public platform, users can share data privately. Controversy emerged after users’ data was allegedly being misdirected and sold to third parties without users’ full consent (Guzenko). This arguably violates the BCC’s definition of unlawful system infiltration: since Facebook is in charge of maintaining data security for its users, Facebook selling data without users’ consent could constitute unlawful data infiltration under Article 2 of the BCC. Additionally, Facebook was hacked by a third party and millions of users’ data was exposed without consent. After this particular hack, the European Union stepped in, threatening Facebook with a fine of over a billion euros (Schechner).

The EU argues Facebook did not do enough to protect users’ data from infiltration, but such accusations do little for cyber law, because the EU could always argue that an entity “didn’t do enough” to protect user data and supplement that reasoning with a fine. Rather, since cyber law is a relatively new field, legal entities ought to cooperate with influential players within the cyber world to create viable solutions to cybercrime. The EU’s accusations assume that Facebook is the entity that allows third parties to access users’ data, but in many cases it is the users themselves who allow it: restricting one’s data is an option if a user changes their privacy settings. It is also unrealistic to expect Facebook to curtail the behavior of its more than one billion users, some of whom may be more susceptible to data hackers due to their ignorance of data security. The reasoning the EU applies to Facebook would be similar to this: if an EU citizen were to steal property from another citizen, then the EU itself would be held accountable for allowing that to occur, and would thus deserve to be fined. Facebook is similar to the EU in that it is a microcosm of different sets of people behaving in ways it cannot totally predict. So instead of focusing on how specific internet services operate, focus should be shifted to network self-enforcement. A prime example of that shift in focus is China’s comprehensive self-regulating internet network.

China’s Internet Network

Modern cybercrime has evolved in terms of the magnitude of attacks. In the 21st century, cyber attacks, particularly on infrastructure, are not only possible but prevalent (Schmitt). An example of such an attack is described by Tom Ball, a contributor to the Computer Business Review:

“…in December 2015 a massive power outage hit the Ukraine, and it was found to be the result of a supervisory control and data acquisition (SCADA) cyber-attack. This instance left around 230,000 people in the West of the country without power for hours…chaos was sewn using spear phishing emails, a low tech approach to launch such an attack; this trend is relevant today, with phishing still being used against critical infrastructure.” (Ball)

That is only one case, but hacks on critical infrastructure can range from manipulating data in ways that change the chemical composition of drugs being manufactured to infiltrating a dam and redirecting electricity (Schmitt). Countries have implemented measures to mitigate and respond to this risk to critical infrastructure. A notable response has come from the Chinese government. China has been able to mitigate the risk of data infiltration by regulating its domestic network through both legislation and technology. This may seem like a normal approach, but China is unique in that its network is reinforced by technology that actively implements its cyber legislation. It is difficult to understand the full scope of how China does this (the government is not transparent about how the network operates), but some aspects of how data transfer is controlled are clearly observable. First, data transfer in China is purposefully slowed. This matters because it allows the network to detect potential disturbances before they fully manifest, so infiltration can be detected more easily (Chew). Second, as mentioned before, the system self-regulates: the government implements its own filters through comprehensive state technology that enforces Chinese law, and it puts heavy pressure on Chinese firms to self-regulate content, meaning a firm operating in the cyber world must follow the cyber laws or face a shutdown of its business or a hefty fine. This is similar to how the EU dealt with Facebook, the key difference being that the EU lacks its own data enforcement technology (Chew). And lastly, China’s network responds to infringements of its cyber law by quickly “poisoning” unlawful data connections: if a hacker attempts to infiltrate a data network, the connection is immediately detected as a “poisonous connection” and the hacker is cut off (Chew). Arguably, this ability to actively regulate data under a legal framework is what sets China’s cyber network apart. It is worth noting, however, that while China’s network does well at enforcing its cyber law by regulating data, it is not exempt from criticism.
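Purely for illustration, and with the caveat that the actual mechanisms are not publicly documented, a rule-based filter of the kind described above might look something like the following sketch: flagged connections are refused (the analogue of “poisoning”), and everything else is deliberately slowed. The hosts, keywords, and delay are hypothetical placeholders, not a description of any real system.

```python
# Hypothetical sketch of rule-based connection filtering with throttling.
import time

BLOCKED_HOSTS = {"example-banned-site.test"}  # hypothetical blocklist
BLOCKED_KEYWORDS = {"banned-term"}            # hypothetical keyword filter
THROTTLE_SECONDS = 0.5                        # illustrative per-request delay


def inspect_connection(host: str, payload: str) -> str:
    """Return 'blocked' for flagged traffic; otherwise throttle and allow it."""
    if host in BLOCKED_HOSTS:
        return "blocked"  # analogous to "poisoning" the connection
    if any(keyword in payload.lower() for keyword in BLOCKED_KEYWORDS):
        return "blocked"
    time.sleep(THROTTLE_SECONDS)  # deliberate slowdown gives detection systems time
    return "allowed"


if __name__ == "__main__":
    print(inspect_connection("example-banned-site.test", "hello"))        # blocked
    print(inspect_connection("ordinary-site.test", "ordinary traffic"))   # allowed, after a delay
```

The sketch is only meant to show why embedding enforcement rules in the network itself makes legal requirements self-executing in a way that after-the-fact fines, like those the EU levied against Facebook, are not.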

China’s cyber network is rather effective at enforcing its cyber law, but it has also been used to disenfranchise the country’s own civilian population. Qiang Xiao, an internet researcher at UC Berkeley, describes China’s internet network as follows:

“…it has consistently and tirelessly worked to improve and expand its ability to control online speech and to silence voices that are considered too provocative or challenging to the status quo.” (Xiao)

Such things, unfortunately, should be expected; after all, it makes sense for illiberal governments to build illiberal computer networks. But such realities should not deter liberal governments from trying to conceptualize and develop internet networks that enforce cyber law. This is easier said than done, because it is easier to enforce authoritarian cyber law than more liberal law (Schmitt).

This is mainly because the amount of computing power needed to create a liberal network would be immense. Alongside that scale, economic factors come into play in determining how effectively cyber law is enforced in a region, leaving poorer liberal nations behind. Despite that, developments in quantum computing may allow massive amounts of data to be processed, and the technology is getting cheaper as it develops (IBM). Quantum computing also opens the door for a network to self-learn, enabling better self-enforcement (IBM). Such features may intrigue governments that want to enforce a common-law internet network, since it could learn from old legal precedents. So enforceable cyber law for liberal governments is a real possibility in the future, and perhaps a necessity if strong cyber law is to be properly enforced.

In conclusion, given the problems with the enforcement and efficiency of cyber law, networks that actively enforce cyber law through the network itself are necessary to achieve practical enforcement of data protections. Different legal jurisdictions naturally have different laws concerning their respective networks, so various network legal structures would need to exist to facilitate their cyber laws. Developments in quantum computing may pave the way for networks to regulate themselves.

Works Cited

“18 U.S. Code § 1030 – Fraud and Related Activity in Connection with Computers.” LII / Legal Information Institute, Legal Information Institute, http://www.law.cornell.edu/uscode/text/18/1030.

Ball, Tom. “Top 5 Critical Infrastructure Cyber Attacks.” Computer Business Review, Computer Business Review, 18 Jan. 2018, http://www.cbronline.com/cybersecurity/top-5-infrastructure-hacks/.

“Budapest Convention on Cybercrime.” Council of Europe, Council of Europe, http://www.coe.int/en/web/conventions/full-list/-/conventions/rms/0900001680081561.

Chew, Wei Chun. “How It Works: Great Firewall of China – Wei Chun Chew – Medium.” Medium.com, Medium, 1 May 2018, medium.com/@chewweichun/how-it-works-great-firewall-of-china-c0ef16454475.

Guzenko, Ivan. “The Third-Party Data Crisis: How the Facebook Data Breach Affects the Ad Tech.” MarTechSeries, 5 July 2018, martechseries.com/mts-insights/guest-authors/the-third-party-data-crisis-how-the-facebook-data-breach-affects-the-ad-tech/.

“The History Of Phone Phreaking.” The History of Phone Phreaking — FAQ, http://www.historyofphonephreaking.org/faq.php.

Newman, Jon O. “UNITED STATES of America, Appellee, v. Robert Tappan MORRIS, Defendant–Appellant.” Stanford Law, stanford.edu/~jmayer/law696/week1/Unites%20States%20v.%20Morris.pdf.

Sack, Harald. “The Story of the Morris Worm – First Malware Hits the Internet.” SciHi Blog, 3 Nov. 2018, scihi.org/internet-morris-worm/.

Schechner, Sam. “Facebook Faces Potential $1.63 Billion Fine in Europe Over Data Breach.” The Wall Street Journal, Dow Jones & Company, 30 Sept. 2018, http://www.wsj.com/articles/facebook-faces-potential-1-63-billion-fine-in-europe-over-data-breach-1538330906.

Schmitt, Michael N. “Cyberspace and International Law: The Penumbral Mist of Uncertainty.” Harvard Law Review, harvardlawreview.org/2013/04/cyberspace-and-international-law-the-penumbral-mist-of-uncertainty/.

Storr, Will. “The Kid Hackers Who Starred in a Real-Life WarGames.” The Telegraph, Telegraph Media Group, 16 Sept. 2015, http://www.telegraph.co.uk/film/the-414s/hackers-wargames-true-story/.

“What Is Quantum Computing?” What Is Quantum Computing? , IBM, http://www.research.ibm.com/ibm-q/learn/what-is-quantum-computing/.

Xiao, Qiang. “Recent Mechanisms of State Control over the Chinese Internet – Xiao Qiang.” China Digital Times CDT, chinadigitaltimes.net/2007/07/recent-mechanisms-of-state-control-over-the-chinese-internet-xiao-qiang/.