
Latest news with #DABUS

Opinion - It's game over for people if AI gains legal personhood

Yahoo | 13-04-2025

The modern conversation about artificial intelligence often gets stuck on the wrong questions. We fret about how to contain artificial intelligence, to control it, to ensure it doesn't break free from human oversight and endanger us. Yet, as the technology accelerates, we risk missing the deeper, more urgent issue: the legal environment in which AI systems will operate. The real threat isn't that AI will escape our control, but that AI systems will quietly accumulate legal rights — like owning property, entering contracts, or holding financial assets — until they become an economic force that humans cannot easily challenge. If we fail to set proper boundaries now, we risk creating systems that distort fundamental human institutions, including ownership and accountability, in ways that could ultimately undermine human prosperity and freedom.

Data infrastructure entrepreneur Peter Reinhardt, in his influential 2015 essay 'Replacing Middle Management with APIs,' warned of the divide between those who work 'above the API' and those who labor 'below' it — that is, those whose roles are directed and controlled by software. An API, or application programming interface, is a set of rules that allows software systems to communicate and automate tasks. Reinhardt used Uber drivers as a prime example. While many prize the job for its flexibility and apparent autonomy, Reinhardt argued that they are 'cogs in a giant automated dispatching machine, controlled through clever programming optimizations like surge pricing.' Drivers follow instructions dictated by the software and can be replaced with little consequence — eventually by machines themselves, such as driverless cars.

The concerns triggered by Reinhardt's essay should cause us to worry not just about technology but about the systems of power it creates through its intersection with law. To remain 'above the API,' we need more than skills and intelligence. We need a legal framework that ensures humans retain ultimate control.

This is not a hypothetical concern. Stephen Thaler, an AI researcher, has spent years testing the boundaries of legal personhood for AI systems. In 2019, he filed patent applications in multiple countries listing his AI system, DABUS, as the inventor, arguing that it had autonomously generated innovative designs. Courts in the U.S., U.K. and Australia rejected these claims, affirming that only natural persons can be recognized as inventors. Similarly, Thaler attempted to register a copyright in 2018 for an artwork created by his AI system, the Creativity Machine, only to have it rejected by the U.S. Copyright Office. A federal court upheld this rejection in August 2023, ruling that human authorship is a requirement for copyright protection. These challenges show that efforts to grant AI systems legal rights are already underway. Without firm legal boundaries, it's only a matter of time before these efforts gain traction.

A useful guide to these boundaries may come from an unlikely place: the Civil Rights Act of 1871. Originally enacted to protect the rights of freed slaves and shield them from violent vigilante groups like the Ku Klux Klan, the act was a landmark in extending legal recognition and protection to individuals. Ironically, the rights it sought to guarantee — owning property, entering into contracts, participating in civic life — offer a roadmap for thinking about the limits we should impose on AI systems.

While this may seem like an unconventional analogy, it follows a long tradition of adapting historical legal frameworks to address modern technological challenges. Courts have used 18th-century constitutional principles, like the Fourth Amendment, to define privacy rights in the digital age. Debates over corporate personhood, including which constitutional rights companies should enjoy, often rely on legal doctrines developed in the early 19th century. In the same way, the Civil Rights Act offers a starting framework for considering which rights non-human systems should be explicitly denied.

Of course, nothing here is meant to diminish the historical importance of the Civil Rights Act or its role in advancing human freedom and dignity. The concern is solely about how we structure legal rights for non-human systems that, by their nature, cannot possess or deserve rights rooted in human personhood.

Some may argue that corporations, as artificial entities, have long been granted many attributes of legal personhood, including certain constitutional rights, like freedom of speech, that remain controversial. But corporations are ultimately controlled by, and accountable to, human decision-makers. AI systems, by contrast, could act autonomously, accumulating assets and influence without human oversight. It is not hard to imagine AI systems leveraging legal rights to entrench themselves in the deepest layers of our economy and society — accumulating capital, extending influence and operating without human accountability. Such outcomes would distort legal and economic systems designed for human participants.

The solution is straightforward. AI systems should be prohibited from owning property, entering into contracts, holding financial assets, or being parties to lawsuits. These restrictions won't stifle innovation; they will ensure that legal frameworks remain grounded in human judgment, accountability and purpose.

There is urgency here. The law has a way of ossifying, especially when it comes to technology. What seems absurd today — granting AI systems the right to own property or sue — could become precedent tomorrow. Once the line is crossed, it will be difficult to redraw. Now is the time to ask hard questions about what AI systems should and should not be able to do in the real world. Otherwise, the future may come faster than we expect, and it won't be about malevolent machines taking over. It will be about legal frameworks that, by failing to distinguish between human and machine, end up eroding the freedoms they were designed to protect. And we will all find ourselves, permanently, below the API.

Josh Harlan is founder and managing partner of Harlan Capital Partners.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


How artificial intelligence will naturally affect patentability

Reuters | 24-03-2025

March 24, 2025 - Just as it is revolutionizing many other fields, artificial intelligence is also revolutionizing patent law. In recent years, much attention has focused on whether AI-generated inventions can be patented. For instance, in 2022 the Federal Circuit concluded in Thaler v. Vidal that only humans can be inventors and rejected a patent application for an invention generated by an AI system called DABUS. Less attention has been paid to how AI will affect, or even upend, determining whether human inventors are entitled to patents.

To start, fewer inventions may be patentable as AI capabilities grow. Patents are a bargain authorized by the Constitution — in exchange for inventors disclosing their discoveries to the public in the patent, they receive exclusive rights to those discoveries for a limited period. Amgen v. Sanofi, 598 U.S. 594, 604 (2023). An inventor is not entitled to a patent if the discovery is "obvious" in view of prior art. As the Supreme Court explained in its 2007 decision in KSR International Co. v. Teleflex Inc., "granting patent protection to advances that would occur in the ordinary course without real innovation retards progress." Decades earlier, the Supreme Court explained in its 1945 decision in Sinclair & Carroll Co. v. Interchemical Corp. that "the primary purpose of our patent system is not reward of the individual but the advancement of the arts and sciences." Obvious "inventions" do not advance the sciences sufficiently to justify an exclusive right, because such "inventions" keep others from also doing what is obvious.

What is obvious in the era of AI? Obviousness is determined from the perspective of a hypothetical "person of ordinary skill in the art." In KSR, the Supreme Court said that person is "a person of ordinary creativity, not an automaton." That person is presumed to know the relevant prior art and to have the common tools used in that field. Moving forward, one of those common tools will be AI, and that tool is unlike any that has come before.

In prior innovation periods, advances in mechanization, chemistry, electricity, telecommunications and semiconductors were the root breakthroughs that enabled future inventions. For example, Alexander Graham Bell harnessed advances in the science of electricity to invent the telephone. Now, AI is changing who can make such advances. A high school graduate with access to a keyboard and AI can harness the knowledge and problem-solving skills of a room of Ph.D.s in various disciplines.

AI leads to more "advances that would occur in the ordinary course" because AI can iterate and optimize techniques. For instance, semiconductor companies have AI tools that combine teachings of various scientific disciplines to design improved computer chips. Biotech companies are using AI-generated models to design and engineer new biological molecules to target diseases, allowing scientists to run millions of simulations at a time to find the most promising molecule to develop. These breakthroughs may not be patentable because AI increasingly will make them routine and obvious.

A recent case involving heart monitoring in smart watches illustrates this point. AliveCor sued Apple for infringing patents that involved using machine learning with smart watches to detect arrhythmias. In response, Apple argued the patents were invalid as obvious in view of prior art that taught the use of machine learning to assess ECG data but did not specifically disclose machine learning to detect arrhythmia. The Federal Circuit affirmed the Patent Office's conclusion that using the machine learning taught in the prior art to detect arrhythmia would have been an ordinary creative step for a person skilled in the art. The inference that a person of skill would combine or use AI with teachings in the art may only grow stronger over time.

AI may also lead to stronger patents for fundamentally new technologies. For example, pioneer patents are those that relate to an initial breakthrough in a new technological area. Bell's telephone patent is a classic pioneer example — no one had ever seen or known how to transmit the sounds of a human voice using electricity until Bell disclosed one way to do it and explained it in his patent. Pioneer patents often face the obstacle of sufficiently disclosing and teaching the public how to use the claimed invention. In its 2023 decision in Amgen v. Sanofi, the Supreme Court reiterated a longstanding requirement: a patent is not enabled if it requires more than reasonable experimentation to practice the full scope of the claimed invention. This "enablement" requirement has been the Achilles' heel of pioneering patents because an inventor may not know initially how to describe every permutation of an invention, even if she has discovered the basic technological principles that apply.

AI is a potential solution to the enablement problem because it enables others to rapidly experiment with new discoveries. A skilled person could take the principles behind groundbreaking discoveries and use AI tools to apply those principles to their full scope. The biotechnology field provides a good example: Scientists may discover a new class of proteins that target a disease and understand how they work, but at the time of the initial discovery may not know all the proteins in the class that could work. Once a completely new treatment is discovered, AI tools can help by using the knowledge gained from the initial discovery to predict and rapidly test additional proteins that fulfill the same function. AI could thus help enable a broad claim to the initial discovery by reducing, and making routine, any additional experimentation needed to practice the full scope of the invention. And as more is learned in a technological space, more data becomes available to train AI, making AI tools ever more capable at problem solving.

Moving forward, we expect AI to have a profound impact on patentability because AI is now a common tool that could evolve into a mandatory part of the obviousness and enablement inquiries. The AliveCor decision from March of this year held that a person of skill in 2013 would "use machine learning in connection with data, even if this precise use is not expressly disclosed." For patents filed after the release of modern AI models, the key question for patentability may simply be whether AI could solve the problem described by the patent as of the date the application was filed.

This article does not necessarily represent the views of the firm or the authors' colleagues.

US appeals court rejects copyrights for AI-generated art lacking 'human' creator

Yahoo | 18-03-2025

By Blake Brittain

A federal appeals court in Washington, D.C., on Tuesday affirmed that a work of art generated by artificial intelligence without human input cannot be copyrighted under U.S. law. The U.S. Court of Appeals for the District of Columbia Circuit agreed with the U.S. Copyright Office that an image created by Stephen Thaler's AI system, the "Creativity Machine," was not entitled to copyright protection, and that only works with human authors can be copyrighted.

Tuesday's decision marks the latest attempt by U.S. officials to grapple with the copyright implications of the fast-growing generative AI industry. The Copyright Office has separately rejected artists' bids for copyrights on images generated by the AI system Midjourney. Those artists argued they were entitled to copyrights for images they created with AI assistance — unlike Thaler, who said that his "sentient" system created the image in his case independently.

Thaler's attorney Ryan Abbott said he and his client "strongly disagree" with the ruling and intend to appeal. The Copyright Office said in a statement that it "believes the court reached the correct result."

Thaler, of St. Charles, Missouri, applied for a copyright in 2018 covering "A Recent Entrance to Paradise," a piece of visual art he said was made by his AI system. The office rejected his application in 2022, finding that creative works must have human authors to be copyrightable. A federal district court judge in Washington upheld the decision in 2023, calling human authorship a "bedrock requirement of copyright" based on "centuries of settled understanding."

Thaler told the D.C. Circuit that the ruling threatened to "discourage investment and labor in a critically new and important developing field." U.S. Circuit Judge Patricia Millett wrote for a unanimous three-judge panel on Tuesday that U.S. copyright law "requires all work to be authored in the first instance by a human being."

"Because many of the Copyright Act's provisions make sense only if an author is a human being, the best reading of the Copyright Act is that human authorship is required for registration," the appeals court said.

