This may seem like a trivial or odd question to you. The obvious answer is that the person who created the AI should also own the rights to any creations produced by that AI, unless those creations have been explicitly given or sold to someone else.
As a general policy, this may seem reasonable given societal norms to date. If you create something that provides value to someone else, you keep the compensation for that value, even when you are no longer the one directly providing it. Examples of this can be seen all across the internet: you access websites that provide services to you, and you compensate the creators/owners of the website, not the website itself (even the thought of compensating the website itself sounds ridiculous).
So does that imply that with the advent of AI, the creators of those AIs will also be the owners of the creations of those AIs? Let’s explore some of the logical, social, and potentially legal consequences that could come from that assumption.
Is The Creator Responsible For Everything the AI Can Do?
The first thing to know about AIs is that, more often than not, they don’t learn solely of their own accord. There are cases in which an AI can learn entirely from novel data never explicitly provided by its human creator, but for many complex tasks, real-world data is required for the AI to learn. Using self-driving cars as an example, Tesla’s cars use information about real-world driving behaviors to teach their AIs how to drive correctly and safely on roadways. Although some of this can be modeled in driving simulators, real-world data is still a requirement for the AI to correctly understand the world.
When training AIs, it is important to provide them with large amounts of data to learn from. To date, Tesla’s AIs have over a million hours of real-world driving data to work with, which exposes them to all kinds of edge cases in driving.
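To make the division of labor concrete, here is a minimal sketch (entirely hypothetical, and nothing like Tesla’s actual system) of how an AI “learns” from user-contributed data: the creator supplies only the algorithm, while the model’s learned parameter comes entirely from the drivers’ records.

```python
# Hypothetical driving records contributed by different drivers:
# (speed in mph, following distance in meters).
driver_data = [(30, 15.0), (45, 22.5), (60, 30.0), (70, 35.0)]

# The creator's contribution: a learning rule. Here, fit a simple
# safe-following-distance model (distance = w * speed) to the data
# by least-squares gradient descent.
w = 0.0
for _ in range(1000):
    grad = sum(2 * (w * s - d) * s for s, d in driver_data) / len(driver_data)
    w -= 0.0001 * grad

# The learned parameter w encodes the drivers' collective behavior,
# not anything the creator wrote down.
print(round(w, 2))  # about 0.5: these drivers keep ~0.5 m per mph
```

The code the creator wrote is identical no matter whose data is fed in; everything the model actually “knows” was extracted from the contributors. That asymmetry is what drives the ownership question below.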
The next question you might have is: where does all of this data come from? Collecting it all as a single company would cost an exorbitant amount, so the easier solution is to outsource the collection, specifically to the people who buy Tesla cars. Yes, that’s right: the people who buy and drive the cars are providing Tesla with the data needed to further train its AIs.
The hidden ethical question here is: if the AI being created is based on the real-world driving behaviors of the people who bought the cars, who owns that data? And by proxy, does the AI really belong to Tesla if it was created using its customers’ data? Although Tesla may have put the pieces together to get the AI to work, at the heart of the AI is the personal data of everyone who contributed to it through their driving behaviors.
Generated by a text-to-picture AI with the inspiration “The Machines Are Taking Over the World”
The Moral Perspective
In the field of research, the nearly universally agreed-upon standard for any research involving the collection of participant data is that the participant must be informed in order to be able to consent. This moral principle stemmed from questionable research conducted in the ’50s and ’60s (and likely well before then) that allowed some of the most controversial and inhumane experiments to happen, leaving some participants with physical and psychological trauma that would stay with them for the rest of their lives (or, in extreme cases, causing premature death). Since then, review boards have been put in place to protect participants from unethical or unnecessary research experiments.
Given this ethical shift in the field of research, it is clear that technology has not yet caught up. There are many cases where technology can and does track your data, often without informed consent and sometimes without your knowledge at all. If the same ethical standard were applied universally, users would likely need to be informed of the data being collected about them.
The requirement behind informed consent is that it is in fact informed. If someone were to agree to participate in a study before knowing about the study or any of its associated risks, it would no longer be informed consent, in that there is no way for the participant to properly assess the risk to them. Morally, the same applies to legal documents involving data collection. Although someone may sign and agree to a legal document, the signature carries no moral standing if the signer has no knowledge of the document’s contents.
The point is that unless the data being collected for later use in the creation of an AI is explicitly disclosed to the person providing it, the resulting AI is likely not ethically sourced. Its creation required contributions from other people, which at the extreme end would entitle the data providers to at least a share of it, and at the less extreme end would mean that ownership of the AI doesn’t rest solely with its creator.
Legal Perspective
Although the practice of law itself is a highly complex topic, much can be inferred about the current state of the law based on what some major corporations are currently doing and on the historical precedent for laws governing novel technology.
Unfortunately, the legal system is not capable of keeping up with the advancement of technology. This is through no fault of its own; in fact, deliberation is likely the better way to conduct legal matters, in that the outcomes and consequences of a given law should be carefully considered before it is put in place (which ideally prevents the law from being used as a tool against people you merely disagree with). However, this strength becomes a weakness when the system can no longer keep up with the fast pace of technological change, allowing it to be exploited in cases where something may be morally wrong or unjust but has not yet been addressed in any legal capacity.
In this sense, technology has already been exploited (and continues to be exploited) through the use of personal data collected that may or may not have been agreed to by the party involved. Think about any digital service you may use where you had to create an account. The mere creation of the account has given the owner of that service the power to collect data about you, through whatever information you provide on your account as well as through your behaviors while interacting with that service. In some cases, it goes beyond interactions with that service alone, in that it can pull information from other services on your device.
Some may point out that, legally speaking, you have agreed to this data collection through the creation of the account or the use of the service. However, given the moral stance above, agreement without genuine understanding of what is being collected still falls short of informed consent.
Despite this morally questionable way of conducting digital services, it is clear that there is little to no legislation specifically preventing organizations from doing so. In the coming years, ever more detailed information about individuals will continue to be collected, potentially to the point where a company knows more about you than you know about yourself. Where this becomes interesting is when companies begin creating AIs that can emulate even more complex human behavior. When a company successfully creates an AI that could act or behave as you do, will that company still own that AI? If the AI were designed to behave in ways that only you behave, and it did so using your data, would that company then own a part of you, given that the AI was built on your data?
For now this may sound like science fiction, but the day could come when fiction becomes reality and we’ll have to decide on the limits of AI ownership.
Generated by a text-to-picture AI with the inspiration “The Future of AI”
What The Future Will Hold
At some point, the question of ownership will be pushed even further when the first AI builds its own AI. Given rapidly advancing technology allowing for faster and more powerful computers, it is only a matter of time until an AI is created that can fully build its own AIs and instruct them to achieve a given goal; from there we reach a chain reaction of creation in which humans need no longer apply themselves. When an AI creates another AI, who will own it? Given that the new AI can perform a task the original creator is entirely unaware of, would that creator still be entitled to compensation? And if an AI is just an algorithm we can replicate, and the only resource those AIs consume is electrical power, is their owner simply the person who pays the power bill?
The point of these questions isn’t to fantasize about what the world may become; the future is unknown and will stay that way until we get there. It is to prepare ourselves for what the future will likely hold and for some of the hard decisions yet to come.