Abstract
Human-AGI relations will soon become the subject of a number of policies and regulations. Although most current blue-sky de lege ferenda postulates for a robot and artificial intelligence regulatory framework focus on the liability of the producer or the owner of the AI-based product, one might try to conceptualize the legal relations and rules for coexistence between humans and anthropocognitive AIs (AGIs) possessing proper capacity. The main purpose of this article is to explore the possibility of applying the principles of Metalaw to such relations. The scope shall consider a non-chattel and non-property-based status of these types of AIs, as well as the sufficient advancement of such entities, or the emergence of advanced non-human intelligence.
1 Introduction
Laws are made by humans. That sentence sums up the anthropocentric approach to the human legal sphere and where human-made laws place their center of gravity. Whether it is regulations concerning air travel, mining, commerce, firearms or drones, the basic principle remains that laws are made by humans and with human safety in mind. Even current regulation surrounding artificial intelligence based systems, whether Big Data driven AIs, IoT systems or autonomous vehicles, puts the safety of human owners, users and bystanders first. For the majority of human history it was the basic function of the law to protect its subjects and actors from violence, malice, damage and harm done by other actors. Both sides of a legal dispute were humans. Sometimes those humans represented humans of lesser legal standing or organizations they were responsible for. Humans were also responsible for the well-being of, and the actions and damage done by, animals and eventually machines. But in an environment where machines are given legal standing and personhood, there is not much certainty about the shape and form of relations between two forms of sentient actors. Thus, this article looks into current trends in Artificial Intelligence Law that revolve around robot and AI agency and tries to apply the principles of Metalaw to them.
Metalaw is a concept from the early doctrine of international space law. Its purpose was to create an interplanetary equivalent to international law, based on the principles of natural law and structured around a common denominator. Andrew G. Haley coined the term Metalaw in 1956; his concept centered around the Golden Rule and was based on the Kantian categorical imperative [1, 2]. Metalaw is currently used as an umbrella term for similar concepts developed in later years by Ernst Fasan [3] and other space law scholars. The concept was conceived before the launch of Sputnik on the 4th of October 1957 and before any Martian or Venusian flyby mission; it was thus thought of as a set of principles for first contact and mutual coexistence between civilizations in outer space. While later developments in technology and space exploration ruled out civilized Venusians and Martians, Metalaw is still discussed as part of SETI (the Search for Extraterrestrial Intelligence), and it had an impact on the state of present-day international space law. Metalaw presents a far different approach than most present-day posthumanist or transhumanist approaches to “non-human” actors in law. It is based on mutuality and equal legal standing between parties. The point of this article is to find out whether the principles of Metalaw outlined by Haley or Fasan would fit Human–AI relations. Metalaw is a much older concept than AI law and follows a different philosophy than laws designed for robots. While robot laws are in the majority based on positive law, Metalaw tries to find universal rules that would work as a basis for any contact and interspecies relation. It is reasonable to take a look at the idea of human coexistence with sentient machines from a metalegal point of view, which proposes equal and equitable treatment of all parties of Metalaw.
In this context we have to clearly distinguish present-day AI from the potential Artificial General Intelligence. Present-day AIs, or Machine Intelligences, are purpose-driven programs, mostly deep learning artificial neural networks used for specific tasks and emulating feats of human cognition. The potential AGI is one of the goals of AI researchers, and its purpose is not merely to automate calculation and narrow cognitive functions. The goal of AGI researchers and architects is to create a multipurpose artificial brain, capable of running both high and low cognitive functions simultaneously, able to emulate reason, judgement and emotions in a manner resembling a human being [4].
The majority of works on AI law, Metalaw and especially AGI are theoretical or speculative, due to the nature of their subjects. The lack of empirical data stems from the fact that humans have neither entered into contact with Extraterrestrial Intelligences, nor has any human-like artificial intelligence been created. Furthermore, AI law or robot law has been a speculative discipline of legal studies, concerned with de lege ferenda postulates on the personhood of robotic or computer legal actors possessing different levels of intelligence and the ability to reason like adult or adolescent human beings. It was not until recent developments in AI, such as autonomous cars, facial recognition, chatbot artificial agents, avatars and big data systems, that the field of AI law gained broader interest and became part of present-day legislation on personal data protection or abusive profiling. Thus the main focus of present-day AI law mostly concerns programmable software agents (virtual assistants), deep learning algorithms and robotic platforms, rather than robot personhood and robot rights, although theories and speculations on non-human legal actors are still part of the AI law discipline. As for Metalaw, it shares a similar speculative history with AI law, having been part of the then-uncodified international space law. Although it was conceived as a proposed legal framework or a philosophy of law stemming from natural law, its principles can be found reflected in papers discussing relations between humans settled in different parts of the solar system, or even general relations between humans and future transhuman entities. Due to these facts, this article is based on both grounded academic research and the public discourse surrounding these two legal and technological topics.
2 Artificial intelligence law – regulatory models regarding sentient AIs
Artificial intelligences and robots are presumed to be nonhuman agents who, by their actions, perform a role subservient to that of a human or humans in general. While most current works and research concerning AI [5] are focused on specialized and purpose-driven programs and platforms, a truly autonomous and general artificial intelligence will pose several problems and ethical concerns. Most of the concerns surrounding AIs are linked to keeping humans safe from any harm that the misuse or malfunction of those programs might pose. At the current state of development, AI research is concentrated less on creating a Faustian homunculus and more on a pocket or ubiquitous servant that would obey and safely carry out every command given by a human user. That is not to say that such an approach lacks support in academic thought, yet AI researchers cannot predict the precise time when human-robot relations would gradually change from user-tool to that of beings on equal legal and social standing.
A review of the literature shows that scholars take Asimov’s Three Laws of Robotics as a reference point for further academic elaboration on laws guarding humans from harm caused either by misconduct or by malevolence carried out by an artificial “life form” [6]. However, the Laws of Robotics are mostly used as a disposable booster rocket that helps to drive the discussion. After their initial presentation, they are discarded upon deeper elaboration, due to their simplistic nature and the role that they actually played in Asimov’s stories.
Asimov’s laws provide the following: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. In addition, a fourth law was presented in a later installment of the series. It is commonly known as the Zeroth Law: A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
It is worth noting that although the general principles of the robot laws sound reasonable, they were meant to be broken by said robots. The purpose of showing how a robot might end up breaking the Laws of Robotics was to expose the fallacies of generalized regulations and how reasoning might be an obstacle to abiding by the Laws. Based on the failings of the original robot laws, Robin Murphy and David D. Woods proposed their own Three Laws [7]: 1) A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 2) A robot must respond to humans as appropriate for their roles. 3) A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
It is clear from this proposal that such robots shall be designed with only “sufficient autonomy”. This aligns with the stated general rule that robots are a product introduced (“deployed”) to the market by a human. These robots shall also be equipped with a built-in set of “regulators”, preventing their actions from producing negative ethical or legal outcomes. Murphy and Woods therefore see robots as part of a “human-robot work system”, and thus not as partners to humans.
Another example of referencing and building upon Asimov’s Three Laws is given by the Engineering and Physical Sciences Research Council. The EPSRC elaborates even further on the body of work presented by Woods and Murphy, providing us with Ethical Principles for designers and builders of robots: 1) Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 2) Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 3) Robots are products. They should be designed using processes which assure their safety and security. 4) Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 5) The person with legal responsibility for a robot should be attributed. [8]
These principles provide a further exploration of Asimov’s Laws, given that they explicitly state the following: robots are products; robots are manufactured artifacts; it is the human that is responsible for actions of the robot.
Therefore, even when endowed with sufficient autonomy, robots are designed to serve humans. Protecting humans from harm caused by a robot’s error or design is given high priority in these principles. The EPSRC principles are followed by seven High Level Messages:
1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
2. Bad practice hurts us all.
3. Addressing obvious public concerns will help us all make progress.
4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
5. To understand the context and consequences of our research, we should work with experts from other disciplines, including: social sciences, law, philosophy and the arts.
6. We should consider the ethics of transparency: are there limits to what should be openly available?
7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.
These messages are not directly linked to the creation or operation of robots themselves. They should be seen either as ethical pillars for the people engaged in the development and supply of robots, or as supplementary guidelines for responsible roboticists. Message 7 in particular deserves consideration. In times of erroneous journalism and the spread of fake news, there might be instances of moral panic, where a journalist exaggerates an incident or tries to convince the public that a robot “has rebelled.” The messages also put an emphasis on the issue of transparency in AI and robotics, thus creating a starting point for discussing the balance between openness and business practices in robotics.
Another incarnation of Asimov’s Laws comes from Satya Nadella, the CEO of Microsoft Corporation. He stated in an interview for Slate that he has observed several ethical principles being pursued by Microsoft’s AI developers: 1) AI must be designed to assist humanity; 2) AI must be transparent; 3) AI must maximize efficiencies without destroying the dignity of people; 4) AI must be designed for intelligent privacy; 5) AI must have algorithmic accountability; 6) AI must guard against bias. [9]
However, Nadella points out that, in his opinion, there are also several “musts” for humans in order to stay relevant. Those musts are a set of skills that are, in this case, unique to humans: empathy, education, creativity, and judgment and accountability.
Nadella’s ethical principles show us the modern approach to the ideas presented in Asimov’s Three Laws. This approach is better suited to the actual state of AI, as Asimov’s works were published decades before robots and AIs gained a significant presence in public space, let alone were used as products or services. The first principle explains the purpose of the others by stating that an AI must be designed to assist humanity. The other points are different approaches to what we consider safety measures. Most earlier elaborations on Asimov’s laws, as well as laws for AIs and robots based on Asimov’s template, considered mostly physical harm caused by a robot or AI. Nadella’s principles for AI designers, however, are also set to protect human privacy and dignity and to guard against machines treating humans in a biased manner. These principles are followed, as presented, by the set of skills that a designer must possess in order to design a safe and assisting AI. One might doubt that those skills are unique to human beings, yet at present, the only agents running society and designing machines are humans.
The aforementioned sets of rules have one factor in common, that is, the dominance of humans over robots and AIs. Subordination of robots and the emphasis on protecting human lives and privacy is not a basis for relations between two groups sharing the same space in a society. It is more a recognition of the responsibility of the robot’s manufacturer or owner for any harm or damage done to a human. Most of these proposals are human-centered, and that anthropocentric approach might end up inhibiting the development of a more biomimetic Artificial General Intelligence[1].
On another note, Mark W. Tilden proposed his three rules for simple robotics, which might actually be more progressive in this manner than all the former proposals: I) A robot must protect its existence at all costs. II) A robot must obtain and maintain access to its own power source. III) A robot must continually search for better power sources.
What Tilden proposes is creating basic laws for robots that are not focused on human safety. His rules are concerned only with the sustainability and self-development of robots. Pop culture has presented us with scenarios that include the emergence of robot-vampires hunting humans for energy, the farming of humans like batteries[2], or the development of some form of Grey Goo phenomenon [10]. Of these, the only academically plausible scenario is the grey goo, which has been presented by Nick Bostrom as the “paperclip maximizer” [11]. However, both these scenarios require only a simple program that can be repeated ad infinitum by a non-sentient machine.
3 AGI and strong AI
The creation of a human-level general intelligence is one of the early promises, and also fears, of modern-day popular and intellectual culture. Concepts of achieving an AGI by directly mapping the human brain have been theorized by scholars in the past [12]. Currently ongoing projects like the Human Brain Project [13], the OpenWorm Project [14] or the Blue Brain Project [15] are sometimes pointed to as the pathway to creating AGIs via whole brain emulation [16]. For the purposes of our analysis, however, we shall refrain from discussing emulated entities, mind uploads and mind-clones [17]. The focus of this paper shall be only on human-made or emergent AGIs or Strong AIs.
An Artificial General Intelligence [18] is a program which possesses general intelligence akin to that of an adult human being. Other features of such a program should include the capacity for self-awareness, sapience, sentience, and consciousness. Generally speaking, the purpose of the pursuit of AGI is to create a learning AI, not limited to being a problem-solving savant focused on preprogrammed tasks, but able to develop traits corresponding to human-like intelligence. However, some point out that a self-programming AI would require “no less than a full automation of the human developer’s high brain functions.” [19]
John Searle describes a concept he names Strong AI in the following words:
I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (Artificial Intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations. [20]
Although there is a blurred line between an AGI and a Strong AI, the distinction mostly boils down to finding quantifiable factors that might distinguish the preprogrammed sapience and sentience of an AGI in the form of a Friendly AI from real sapience and sentience developed by the machine. It stands to reason that a sapient AGI should be treated as a living being, if not as a human being. There is no certainty whether the state of equaling human general intelligence and human emotional capacity will be brief, due to the proposed concept of the intelligence explosion [21]. However, AI researchers point out that at some level machines may become moral agents [22], and as such the concept of recognizing them as beings on a similar standing to humans, rather than as tools, becomes valid.
The embodiment hypothesis plays a major role in discussing the computational capacity or complexity required for a system to achieve human- or animal-like self-awareness. This would also require the possession of a sufficiently complex sensory apparatus or virtual input collectors.
The concept of AGI also bears the problematic anthropocentric approach to General Intelligence [23]. This can be seen especially in the use of “human-like intelligence” in comparison with goal-oriented AIs and trained systems. This gap in terminology arises from the preconception that grants a special status to human cognition. At the same time, it has to be noted that human technophilosophy, like transhumanism, frequently speaks of uplifting nonhuman animal intelligence or of creating a human-like artificial intelligence, where both the animal and the artificial intelligence have to reach the threshold of “human level” before they can be recognized as surpassing that level.
4 Current approach
The problem with such an approach in modern legal and philosophical thought is the inherited hostility towards any supposedly more powerful entity that might rival, and thus threaten, humanity. Bostrom and Yudkowsky have published successful books and papers on the possible risks and threats posed by Superintelligences and AGIs. A popular concept among transhumanists and futurists alike is the Technological Singularity. The term, popularized by Vernor Vinge [24] and stemming from the works of John von Neumann and I. J. Good, got a new life thanks to Ray Kurzweil.
There is a lot of discussion, or rather speculation, based on projections addressing the possible aftermath of the singularity event. On the one hand there are people, especially Singularitarians, embracing the event as a turning point in history. According to the Singularitarians, an AI which surpasses human-level intelligence and is followed by an Intelligence Explosion will save humanity from its problems and biological constraints. Their critics tend to point out that this may also mean the end of humanity as we know it. Less Wrong is best known for the dire concept of a malevolent AGI in the form of the “Roko’s Basilisk” parable. Other groups in technoprogressive or technodemocratic circles view AIs as slaves to humanity and thus, as mentioned before, strive to create a subservient AGI [25].
In the legal writings on the subject of AI, the idea of the coexistence of Humans and AIs/AGIs is presented by very few authors. For instance, Gabriel Hallevy proposed ideas for the treatment of autonomous artificial intelligence entities under criminal law [26]. Hallevy makes some valid points regarding the problem of the criminal liability of an autonomous AI and of making autonomous AIs obey the law. We should assume that a human-like AI might choose not to abide by the law for some peculiar motives, or that certain drives might lead it to ignore any kind of safety or potential liability [27]. In this case, an AGI equipped with human-level general and behavioral intelligence should be able to view the legal system as a social and cultural construct instead of following a preprogrammed set of imperatives and guidelines. Following preprogrammed rules would be an attribute of a weak AI or of the classical subservient image of a robot. Assuming that the main function of the Law is the protection of the vulnerable against the potency of the strong, this function should be held in the highest regard when dealing with AGI. According to this principle, humans require legal protection against any damage that might result from the actions of an AGI/Strong AI. Whether it is physical harm, psychological trauma, loss of assets and property, personal data mishandling or social exclusion, humans are vulnerable to the actions of an AGI. Furthermore, there is the issue of the potential “lack of suffering” on the side of AGIs. If illegal or harmful actions of an AGI cannot be inhibited by anything other than brute-force prevention or a “kill switch”, then there is something awfully wrong with our legal culture. This problem stands in contrast with transhumanists advocating the eradication of suffering, mortality and aging, while at the same time those very factors are part of our penal culture.
Still, even some of the more moderate advocates in the technoprogressive community try to calm the public’s fears of a rampant AI takeover, or of a loss of control over the “Frankenstein’s monster” or the mythical Golem, by talking about “kill switches”. Tales of intelligent creations running amok are prevalent throughout human culture. Furthermore, humans may be persuaded against committing a crime through fear or sheer calculation due to the severity of the punishment. Such punishment may include forfeiture of possessions, monetary fines, arrest, and imprisonment. But humans cannot be certain that a Strong AI would be susceptible to such notions. Without pain receptors, fear of missing out, the cultural significance of material or monetary possessions, or the passage of time itself (which is heavily connected with aging and the severity of imprisonment), an AGI might not be eager to obey the law or respect the terms of an agreement on principles other than sheer philosophy and morality. Thus, some view the kill switch as the only solution and the only insurance for humans against a powerful, sentient, non-corporeal entity. That is the only answer human society has come up with, completely ignoring the issue of intentionality or the position of responsibility in an autonomous sentient AGI [28]. The problem with a Strong AI able to feel emotions might actually come from its fear of the harm or death that human actions, intentions or communication might entail. A sentient being can live with the notion that it will gradually age and die, even though there is a possibility that death may come quicker and in a more violent way. However, living under the proverbial gun and in a state of constant fear, where someone can suddenly use that kill switch or another weakness to incapacitate, kill, or threaten the AGI, is a highly traumatizing experience.
In fact, present-day concepts regarding the legal protection of human rights and the problems of violence in society point out that society needs to mitigate a problem by dealing with its sources and systemic roots. Thus, it might be viewed as utterly inhumane to enforce solutions such as a “kill switch” or developmental inhibitors on a non-human sapience possessing human or equivalent mental capacity.
Then again, one might view the basic non-corporeality, lack of mortality or lack of psychological inhibition as a form of privilege and advantage for the artificial agent. Introduced into the human legal environment, this new actor might incite fear and a sense of insecurity among the population. In encountering an AGI, a certain level of disadvantage might be faced by human actors, especially those of lower social standing.
5 Metalaw
There is an academic field concerning the legal aspects of post-terrestrial human and non-human coexistence. Metalaw is a concept developed from natural law by Andrew G. Haley. Its purpose was to create a common set of rules for every intelligent civilization that would extend beyond Earth’s gravity well. Metalaw assumes that every intelligent life-form in outer space has some common denominator with other forms of advanced life and civilizations. The basic principle of Metalaw is the Interstellar Golden Rule, stating: “Do unto Others as You Would Have Them Do unto You.” Andrew G. Haley, besides being highly involved in the development of the basic principles of Space Law and being the founder of the International Institute of Space Law (IISL), was also very interested in the possibility of finding and contacting civilizations inhabiting other worlds. Haley recognized that humanity must prepare itself “to deal with intelligent beings who are by nature different in kind and who live in environments which are different in kind. Although these propositions open areas of juridical speculation, it is sufficient at this time to establish the simple proposition that we must forego any thought of enforcing our legal concepts on other intelligent beings” [29].
There were also other approaches to the subject of Human–ET relations. Bueckling considered the notion that there might be different kinds of ET civilizations, but that only those possessing the capacity for free will (i.e., not insect-like swarm intelligences) should be viewed as partners to Metalaw. There were also a number of scholars who referred to their concepts as “interplanetary law” or by other terms, with different approaches to Human–ET relations. Faria, for example, states that there is a necessity for a right to self-defense against malevolent acts from other intelligences in the law of interplanetary cooperation [30]. Verplaetse, however, paints a more vicious picture with his concept of the recognition of planetary sovereignty: “If the planets are inhabited, sovereignty may be established only in two ways: by a victorious war or by agreement. War is and will always be the first origin and the ultima ratio. Sovereignty means power and ultimately military and technical power, whatever may be the means and ways. Agreement would be the acceptance by the inhabitants of the rule of the conquerors. The hypothesis of mutual sovereignty is practically excluded as the superior group would necessarily dominate” [31]. Woetzel finds that domination and enslavement are to be viewed as contrary to the generally accepted precepts of the proposed law [32]. However, Ernst Fasan, an Austrian legal scholar, greatly expanded Haley’s concept of Metalaw. He created eleven rules of Metalaw, drawn from the Interstellar Golden Rule. These rules present themselves in the following order (the order in which Fasan sees it proper for them to be drawn):
1. No partner of Metalaw may demand an impossibility.
2. No rule of Metalaw must be complied with when compliance would result in the practical suicide of the obligated race.
3. All intelligent races of the universe have in principle equal rights and values.
4. Every partner of Metalaw has the right of self-determination.
5. Any act which causes harm to another race must be avoided.
6. Every race is entitled to its own living space.
7. Every race has the right to defend itself against any harmful act performed by another race.
8. The principle of preserving one race has priority over the development of another race.
9. In case of damage, the damager must restore the integrity of the damaged party.
10. Metalegal agreements and treaties must be kept.
11. To help the other race by one’s own activities is not a legal but a basic ethical principle.
For further elaboration, we will mostly refer to Fasan’s eleven principles of Metalaw, and to some of their iterations regarding robot or machine-based life forms.
Robert A. Freitas proposed that Metalaw, or “Celegistics” as Korbitz calls it [33], should include the principle of thermoethics. Thermoethics is based on the concept of negentropy, that is, a decrease in entropy achieved by creating value rather than destroying it. This gives rise to the Corollary of Negentropic Equality, which states that entities have equal negentropic rights and responsibilities. Thus, as Freitas writes, the more negentropy an intelligence creates, the more rights and the more responsibilities it has. Freitas follows in the footsteps of Fasan and creates his own negentropic rules, or canons as he calls them:
Canon I: Actions that increase the entropy or disorder of another society should be avoided.
Canon II: Every society holds its negentropy (or information) in trust for all intelligent beings, and should do everything possible to preserve it.
Canon III: Actions that increase the negentropy or order of another race should be carried out.
These, however, are mostly principles of survival and expansion, not principles of non-violent coexistence or cooperation, even though the latter might be read between the lines. As is also understood in legal academia, general legal principles have a tendency to be easily over-interpreted. This is one of the major problems of present-day international space law, where concepts such as utilization, appropriation or celestial body are not properly defined in the acts of law, creating confusion among academics, diplomats and policymakers alike.
6 The difference between ETI and AGI
While discussing the applicability of Metalaw to AI law, especially regarding AGI, one must bear in mind the physical background behind developing space law and metalaw.
First of all, any form of extraterrestrial life, or even intelligence, was viewed as naturally occurring; to be more precise, as a form of intelligence which arose without human intervention. The possibility that ETI will take the form of "postbiological" beings [34], or that contact will be established and carried out using AI probes or even Von Neumann probes [35], has been discussed in academic circles of SETI. However, in the case of metalaw the "form" or "embodiment" of ETI was less relevant than the issue of its existence. As mentioned before, some scholars have raised concerns as to where to draw the line for the sovereignty or political capacity of extraterrestrial lifeforms. The question of whether a colonial organism could possess sentience, and therefore "metalegal capacity," still rested on the notion of its extraterrestrial origins.
AGIs and AIs are generally given a different treatment. They are either man-made or emergent from the human technosphere. As such, they would be related to human activity and human civilization. For example, AGIs would rely deeply on energy and IT infrastructure established by humans. Deep learning algorithms and neural networks were bred on human culture and human activity, and feed on the same energy as our non-intelligent appliances.
The second factor that needs to be taken into account is the territory which either party occupies. In the case of ETI, they would occupy a different celestial body, thus being "avoidable" to humans. Not making contact with ETIs which do not possess a proper level of cultural and technological advancement was a concept publicized in space law academia well before Star Trek [36]. Avoiding contamination, influencing cultures and not enforcing contact still remain topics of academic debate. Robots, AIs and an AGI are not avoidable. Humans may refuse to use them, cross the street when they are around, or even forgo access to the internet. But they share the same planet (and its orbit, not to mention the fact that robots are the only occupants of the Moon and Mars). Humans interact with basic robots and bots on a nearly daily basis. The issue of the limited capacity of subservient bots, self-driving cars, drones or domestic bots is a concern neither for Metalaw nor for AGI law. AGI relations with humans might suffer from the closeness and accessibility that humans would have while interacting with the AGI, and vice versa. And though there might be environmental concerns for both parties, ETI's distance was the focal point for defining the actors of space law.
Space law was developed as part of public international law, whose aim is to regulate the activities of states in some area of activity. Throughout many iterations and concepts, the relations were studied as occurring between nations, governments and proper authorities, not as relations between individual members. We might assume that what may be applicable to the governments of Keplerian exoplanets or a Macrolife-like colony [37] may also be applicable to an entity consisting of one post-singularity gestalt being. However, the post-contact SETI protocols and academic models for frameworks are still based on an authoritative principle.
In the case of establishing contact between humans and ETI, the governments of the Earth are required to establish both a secure frequency for communication and an authority responsible for maintaining the communication and representing Earth. Smith states that astronauts should be tested for ethical fitness, as they would be the representatives of humans to ETI [38]. This statement eventually got reworked into the concept of Envoys of mankind [39], although it remains more of a diplomatic status in relations between state parties carrying out activities in outer space and on celestial bodies than that of ambassadors to ETI. AGIs, on the other hand, shall be embedded in the human technosphere, and thus relations between them and humans are not going to be easy to maintain. At the time Metalaw was drafted and similar concepts were being discussed in academia and pop culture, the only man-made object that had reached outer space was the V2 rocket, and Sputnik had not yet been built. Thus, the notions of "Have spacesuit, will travel" [40] or "space truckers/miners"[3] making contact or establishing relations with ETI were very much the topic of science fiction and, to a lesser extent, futurology. But today 55.1% of Earth's population has Internet access [41], and those people are not trained professionals in the field of computer science, but everyday users of the World Wide Web. Therefore, the creation or emergence of an AGI would pose a greater problem for governance than the discovery of an ETI. Because of the closeness of computer hardware and its vulnerability to hacking, there are serious threats of AGI weaponization, hijacking, or turning it against humans for some malevolent cause. There is a difficulty in creating authorities that would be recognized by an AGI, or AGIs in principle, as the ones responsible for bilateral diplomatic communications, establishing rules and codes of conduct, and resolving any arising disagreements.
One might see transhumans [42] as proper intermediaries, and the shock-wave riders, amplified humans or cyborgs [43] as the envoys, but that concept is better suited to an old science fiction setting than to a foresight scenario or consideration in law. Therefore, it is up to the authorities to create a bilateral or multilateral body for drafting agreements and bylaws and for mitigating or remedying harm between the parties. At this point the question of the applicability of metalaw to human-AGI relations remains open. It should be discussed whether the concept is better suited as the body of a legal framework, or simply as a philosophical principle behind transhuman policy.
7 The applicability of Metalaw
As previously shown, metalaw was created with the purpose of drafting mutually agreeable rules for peaceful coexistence and cooperation between terrestrial humans and extraterrestrial intelligences. Applying the rules of Metalaw to AGI and its coexistence with humans is not an easy task.
For a start, the Interstellar Golden Rule, which is the basis of Haley's concept of a universal natural law, should be the common denominator among all living things in the universe. Definitions of life aside, the rule is the most frequently criticized aspect even within space law academia. Furthermore, the rule itself is imprecise, and Haley's approach to the principle of obedience to it is often summarized in his quote: "It is better to destroy humanity than to break the metalaw" [44]. This should certainly not be the idea guiding human policymakers, for it leaves humanity no reason to ever make contact with ETI, or even AGI, for fear of catastrophic outcomes towards the contacted entities. Ernst Fasan's approach is more reasonable, especially in light of his eleven rules. To better understand them, we must look through them not in the manner in which they are usually cited, but in the way Fasan described their evolution.
1. Any act which causes damage to another race must be avoided
That is the most basic iteration of Haley's Golden Rule. However, it carries much more than the Kantian principle, reflecting much of human philosophy throughout the ages. Haley tends to point out that before preparing for contact, one must make sure that the means of contact (radio, probe, light) will cause no harm to the other party. This principle, put in the context of human-AGI relations, would comply with the basic standards employed in currently drafted moral and ethical norms for autonomous machines. The basic principle that has been emphasized throughout the discourse of human-robot interactions is avoiding harm or damage to the other party. In this sense, an AGI would need to be careful that its actions do not damage human infrastructure, water supplies, the ecosystem, individuals or their property, or domestic life forms. The same rule should apply to the human party on a reciprocal basis: humans need to avoid causing damage to parts of the AGI, its power supply, servitor units, communications, etc.
2. Every race has the right to defend itself against every harmful act perpetrated by another race
This rule stems directly from the previous one, and addresses one of the most fear-inciting aspects of human-AI relations in general. There is a pop-culture-fueled fallacy stating that an AI shall recognize the evil that humans do and will try to obliterate them in order to ensure its own survival. Some stories present it as punishment from deities, aliens, robots or some other beings. Other stories present the rage of Frankenstein's monster, or the disobedience of creations like Man, the Angels, the Golem or the Shoggoth. These are deeply rooted cultural tropes. Most stories containing them either have the author dressing up as aliens with superweapons, sentient supercomputers seizing the world's nuclear arsenal, or a finger-pointing deity, in order to moralize and patronize humanity through the story. Other stories serving similar tropes present a tale of the futility of human ingenuity, where even the outcome of creating a Superintelligence can be viewed as a retelling of the story of the Tower of Babel. There is, moreover, an outstanding prevalence of horror fiction over the utopian solarpunk visions seen in science fiction novels or hard science fiction in general. But this is a topic for a discussion on literature, culture and psychology, not law. The digression was made to address the anthropocentric elephant in the room.
Humans are afraid of alien intelligences, and not merely ETs but also AIs. There is even the culturally rooted fear of a non-human posing as a human: the uncanny valley [45]. Humans are also afraid of non-humans perpetuating the cruelty of other humans. Though, as touched upon in the previous paragraphs, humans certainly like to be entertained by robot rebellions or man-eating aliens, the real-life possibilities of such encounters are not only very probable but also not much entertaining.
Humans are concerned with their safety, and their anthropomorphic approach to Superintelligence or AI is somewhat superstitious. However, understanding this projection might be helpful when discussing concerns regarding the existence of an AGI. If humans and AGI both have the right to self-defense, then in the legal sense both parties stand on equal ground. We may set aside the discussion of all the possible military operations, feints, tactics and approaches one side might use to leverage its advantage over the other. The point is that granting both sides the right to self-preservation and survival should be seen as mandatory. This leads into the third rule.
3. All intelligent races of the universe have, in principle, equal rights and values
This comes as a direct extension of the second rule, where both sides have the same right to self-defense and equal rights towards each other. As a legal rule, however, it doesn't reflect the possible power dynamics that make one partner superior to another. It is quite common to view any human-robot relations through the lens of dominance and power dynamics, as presented in the different incarnations of Asimov's laws. If the purpose of humans were to remain superior to other intelligences, then developing only purpose-driven AIs and prohibiting any work regarding general intelligence in machines might be seen as a reasonable option. Otherwise, humans would need to learn to coexist with other intelligences as partners. As such, no party to Metalaw shall subjugate the other.
4. Every partner of Metalaw has the right of self-determination
This rule grants humans and AGIs the right to pursue their own goals. It is worth noting that, as with other provisions of metalaw, this rule was established for races existing in the same galaxy or solar system, but not quite on the same planet. The purpose of such a provision was to ensure that ETI and humans would not interfere with each other's actions, while remaining careful not to damage any existing or future values of another race. In the case of two intelligences residing on one celestial body, that might be very difficult, and specific rules should be drafted after consultations with the other party. That includes infrastructural, environmental or other projects that would impact the human society, health [46], the terrestrial or cislunar ecosystem, or the access of AGIs to communication and power sources. If a party wishes to isolate itself from other parties, that right should be granted and respected. Self-determination is considered ius cogens in international law and, as such, applies to "nations" with political bodies rather than to individuals. As a form of the right to self-determination, the AGI would be granted the right to perform multiplication or separation of its components; general changes to its architecture or architectures should also be recognized under the right to self-determination. It is, however, unknown at this point how human and AGI parties to metalaw would resolve the issue of integrity if populations of humans decided to opt out of the civilization or the AGIs established their own internal arrangements. The legal concept must not be mistaken for Self-Determination Theory, which is part of research in human-robot interaction and concerns satisfying the human needs of autonomy, competence and relatedness through the appreciation of the robot they have created [47].
5. The principle of preserving one race has priority over the development of the other race
This provision should remain the primary bullet point in discussing human-AGI relations, as it would ensure both the survival of humans and the thriving of both sides. One of the "existential threats" discussed in academia is an AI takeover. Here, however, both sides must enter into a consensual agreement that the wellness and expansion of humanity shall not inhibit the independence, sovereignty, or further development of AGIs. By agreeing to this, AGIs must not pursue such development as may result in "assimilating" human beings, cause a grey-goo event or "transform" the biosphere of the Earth to serve their needs. It should also be recognized that the actions of both parties might have a serious impact on the ecosystem. Humans are more susceptible to environmental disasters than AGIs, yet there are radiation levels or vibrations that can cause medical conditions in the former or damage the delicate circuits of the latter. This could also involve the issue of the depletion of natural resources.
John Gertz, in his paper on ET probes, suggests adding a ban on Von Neumann replicators [48]. These self-replicating machines might actually pose a danger both to the living environment and to technological infrastructure, and thus should either be outlawed, or treated in the same fashion as xenobiological organisms[4], with a safety switch built into them [49]. Bearing this in mind, possible cooperation of humans and AGI in space might be a challenge, as space utilization does not always correspond with space exploration and scientific studies. Let us assume, for the purpose of an example, that there are robot rovers working on the lunar surface. The rovers are controlled by an AGI and require a complex system of network relays or lunar-orbiting telecom satellites. This, however, comes into conflict with possible future far-side radio telescopes being developed by humans. Both parties have to come to a mutual agreement on such technologies, as transmissions from the telecom satellites might interfere with the radio telescopes, severely impairing or jeopardizing the purpose and operations of the far-side observatory. Another matter is power broadcasting [50], the use of high-powered lasers for light sails or thermal rockets, mass-driver transports, or any form of megaproject carried out on planet Earth (a space elevator, orbital rings) or in the solar system (space mining, construction of Jupiter Brains or Shkadov thrusters). Bear in mind that Kardashev scale [51] megaprojects are viewed as a form of Black Sky thinking among futurologists and transhumanists alike; thus, they are in some respect means of development for any future space-faring civilization.
This principle might be the hardest to settle upon as a general rule, as it would require consultations on every major project on a case-by-case basis. Moreover, one might expect that, combining this concept of preservation over development with the provision of self-determination, both sides might find it hard even to assign responsibility for actions. As mentioned before, Metalaw was created as a body of interspecies law, founded on the groundwork of public international law. In such a case, even provisions of public inter-intelligence law would require parties to resolve the issue of liability and responsibility for actions undertaken by their nationals or otherwise subordinate entities. Such provisions are part of the 1967 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, known by the short title of the Outer Space Treaty. Article VI of the treaty establishes obligatory authorization and supervision over the actions of one's nationals in outer space: states are responsible for their actions and liable for the damage done by their nationals or space objects. However, this analogy does not hold water, as the mechanism was meant to work outside of national jurisdiction, whereas the actions of nationals and of machines belonging to AGIs will mostly take place inside national jurisdictions. Thus, there shall be an intersection between the territoriality of human governance and the extraterritoriality of AGIs existing within the infrastructure and operating from within it via robotic units or interfaces. The rule placing the preservation of one race above the development of the other therefore needs to be heavily elaborated upon, supplemented by codes of conduct and multilateral treaties that would provide both sides with sufficient protection and security.
Securing AGIs from human malice should be a priority for the human partners to the law, whereas securing humans from any possible negative outcomes of actions performed by the AGIs should be a priority of the AGIs. Whether in planetary or orbital engineering, both sides might eventually lose if one of them comes to harm.
6. It is not a legal, but an ethical principle that one race should help the other by its own activities
As stated, help towards the other race should be voluntary, not mandatory. AGIs may feel the need to offer assistance to humans in case of a natural disaster, but should not be forced into pursuing such action. In a reciprocal manner, humans helping an AGI with some task should carry out the activity by means of a contract, not through enforcement by the rules of metalaw.
However, this concept was supposed to work for distant worlds, where beaming energy or transporting cargo takes a great deal of resources and time, which one race may not possess. In the case of biological and non-biological intelligences inhabiting the same globe, the rule would need to be adjusted accordingly. A revised version of this rule should read:
In case of dire circumstances, a partner should help another, under the condition that it possesses sustainable means to carry out such a task.
This revision arises from the circumstance that, inhabiting the same planet, both intelligences are interdependent.
7. In case of damage the injurer must restore the integrity of the injured party
This reasonable point needs to be included in discussions on any form of metalaw-based principles for human-AI relations. It is more or less included in present-day proposals for regulating liability for damage done to parties by autonomous vehicles or AI.
8. No partner of Metalaw may demand an impossibility
The rule, as Fasan describes it, does not prohibit time travel, faster-than-light communications or the folding of space-time. It actually concerns a one-sided demand by a partner that would either create a "Catch-22" situation or inhibit any progress of the other party. The AI/AGI should not ask people to forfeit their existence in order for it to transform the world. This can also be seen as an exception to rule 7: the AGI can create measures to avoid future harm and rebuild damaged property or infrastructure, but it may not be able to raise the dead.
9. No rule of Metalaw has to be complied with when compliance would result in practical suicide for the obligated race
That is exactly why one must not treat Haley's overly dramatic approach to metalaw as an imperative. Humans cannot be forced to forfeit their technoculture in order to undo the damage to the environment, or be forced into some form of depopulation or mandatory incapacitation. The same goes for the AGI, which in this case might be forced to "repurpose" itself or "dumb down" in order to meet some energy or computational demands made by the human representatives or the population. In times of limited energy and climate problems, the actions of one intelligence should not cause harm to the other, as one may not demand the impossible or force another to self-harm in order for others to survive. Nevertheless, such a situation would surely be viewed as dire. That ties in with rule 5, the priority of preserving one race over the advancement of the other. However, this rule would require further provisions established by the parties to Metalaw. As with the concepts mentioned before, if AGIs required new power sources, or climate change caused a "nuclear winter effect" rendering ground-based solar panels useless due to dust and clouds shading the Sun, the AGIs might try to create space-based power stations for beaming electric energy down, or expand the nuclear infrastructure on Earth. This would be beneficial for both sides in the long run; however, while human society in some form might survive a climate catastrophe of this scale without electricity, the AGIs require it to exist. Here, too, one race should not be forced into servitude or stagnation on the basis that such action is required by Metalaw. Thus, both sides need to establish their limits and come to terms on cooperation and reciprocity.
10. Metalegal agreements and treaties must be kept
Pacta sunt servanda is the basis of human contract law, and thus no side should pursue actions that might breach the terms of their contract.
11. Every race has a title to its own living space
This one is very problematic in the scope of our scenarios. One cannot agree to scenarios where humans are exiled from Earth[5]. Humankind tends instead to imagine postbiological intelligences performing some feat that is indistinguishable from magic [52], creating megastructures like Dyson swarms, or occupying some repurposed asteroid or an artificial celestial object. But outer space aside, living space shall mean something very different to intelligences inhabiting the same globe. AGIs and humans will not only inhabit three-dimensional spaces as bodies and hardware; they will also use the same energy and communication infrastructure, water supply, atmosphere, radio frequencies and even orbital spectrum. In this case both intelligences should enter into a mutual agreement on the use of those commons, and on how to avoid both the tragedy of the commons and the tragedy of the anticommons.
G. Harry Stine created his own form of metalegal rules, called Canons, which expand several aspects of Fasan's metalaw. Explicitly, Stine invokes the rules of non-violent conflict resolution, the consensual aspect of bilateral communication, and the idea of zones of sensitivity [53]. Thus, the reworking of Metalaw requires the expansion of several concepts, such as those zones of sensitivity. In the case of human-AGI relations this will be a rather difficult subject to resolve, legally and technically, as mentioned before. However, the addition of non-violent conflict resolution should be the actual foundation for the other rules of any future metalaw, whether created between humans and ETI or AI.
It has been shown that metalaw can be applied to human-AGI relations, with certain adjustments to the moral and ethical code, in a manner that would be mutually acceptable. Furthermore, technological and systemic safeguards must be put in place if such rules are to be applied. AGIs should not let other members of their species or subservient intelligences go "rogue," and humans should likewise ensure that no individual or group deliberately hacks or attacks infrastructure belonging or essential to the AGI. And that might be the most difficult task to perform in order to safeguard the agreement and not end up in a disaster.
8 Further thoughts
The main problem of metalaw, as stated, is that its purpose was to regulate relations between races that occupy different celestial bodies. It was from Metalaw and other similar concepts that the principle of non-appropriation, codified in Article II of the Outer Space Treaty, was born. However, those principles were intended for contact with civilizations, even though there were disagreements as to whether to treat post-singularity gestalt hive minds or swarm intelligences as capable of being partners to Metalaw. This assertion differs from the human approach to AGI and from the kinds of robots and AI that humans qualify as capable of possessing the capacity for contact and legal personhood. Holding AIs and robots to "higher standards" than possible alien life forms comes from the fact that it is humans who design and manufacture those machines, and write or train those programs.
The first problem will be assessing the sentience of AIs, as the burden of proof falls on them: it is the AI that must prove sentience and consciousness in order to receive legal protection in the light of human-made law. Alien life forms are mostly considered a wonder, a marvel of nature, in a similar fashion to complex life on Earth. However, it is inherent anthropocentrism that requires non-human animals or AIs to prove their capacity to possess the status of equals to humans. This is mostly due to the territorial nature of human beings, which tends to view intelligent non-humans as utilities.
In this light, Metalaw seems detached from terrestrial reality. Its purpose, creating mutually acceptable rules between space-faring civilizations, was both noble and reflective of the post-World War II spirit of seeking international cooperation. While some provisions, like the rule of preservation over development, the rule of avoiding suicide, the rule of avoiding harm and the right to self-defense, have some applicability to human-AGI relations, they seem lacking. Those provisions leave unanswered many questions that may arise, and do not address issues that are elaborated upon in currently proposed regulations on AIs. Metalaw puts an emphasis on cooperation; however, it doesn't address cases like the "war on terror," where humans may try to convince an AGI to help them track down or eliminate designated targets or humans suspected of criminal activity. It should be borne in mind that criminal law does not always reflect universal ethics, and some paragraphs may criminalize religious or ideological offences. This might be the case in jurisdictions where homosexuality is a criminal act, whereas in other jurisdictions the same class of people are free to enter into marital unions and raise children. Nor does Metalaw address the actions of splinter groups or "rogue elements," which are commonplace in human reality; the involvement of AGIs in human armed conflict, even as a provider of computing power, analysis or hardware; or the issue of cyborgs and other forms of permanent symbiosis between members of both races. In the case of the creation or emergence of an AGI, both sides need to pay attention to their agreement. There is a need to address topics like cooperation in neutralizing rogue elements, the equilibrium between privacy and transparency, or how both parties will operate under the present or a new economy. Thus, Metalaw may only serve as guidance, and not as a template.
9 Conclusions
Metalaw was created to establish a common set of rules that would be applicable in outer space, especially in relations with extraterrestrial intelligences. Its purpose was to secure peaceful coexistence and to be a basis for further acts of law that would be established between humans and other space-faring civilizations. Even though several scholars have suggested that the intelligence or envoy that humans might establish contact with could be an intelligent robot, the rules of metalaw were formed on the basis of two legal concepts: natural law and public international law. As such, Metalaw addressed the relations between whole civilizations, and not their individual members. AI laws, on the other hand, address the issues of responsibility, liability and safety, due to the presence of AIs and robots in the same space as humans. Therefore, the approaches taken in proposed AI and robot regulations have a different emphasis and strike different balances. Although the majority of present-day AI law concepts revolve around human safety, there are several approaches that address AGIs and Strong AIs as members of human society. In this case, however, the rules provided by Haley's or Fasan's Metalaw are difficult to apply to present or forthcoming AI/AGI regulations. The main reason is that Metalaw requires governing bodies on both sides that would come to an agreement on the provisions governing their mutual relations. While it is not unreasonable to treat AGIs as stakeholders in their respective regulatory niches, it would be easier for them to take part in human regulatory processes on the basis of "robotic legal persons" than on the basis of separate political entities. Thus, metalegal concepts and provisions would better serve as a philosophy for regulatory guidelines and ethical frameworks than as regulations themselves.
References
[1] A. G. Haley, “Space Law and Metalaw – A synoptic view,” in Proceedings of the Seventh International Astronautical Congress Rome: Associazione Italiana Razzi, 1956, pp. 435–450.Search in Google Scholar
[2] A. G. Haley, “Space Law and Metalaw – Jurisdiction Defined,” J. Air Law Commerce vol. 24, no. 3, pp. 286–303, 1957.Search in Google Scholar
[3] E. Fasan, Relations with Alien Intelligences Berlin: Berlin Verlag, 1970.Search in Google Scholar
[4] W. Duch, “Architektury kognitywne, czyli jak zbudować sztuczny umysł,” in Neurocybernetyka teoretyczna R. Tadeusiewicz (Ed.), Warsaw: Wydawnictwo Uniwersytetu Warszawskiego, 2009, pp. 329–361.10.31338/uw.9788323540274.pp.271-304Search in Google Scholar
[5] J. McCarthy, “Who coined the term, defined it as: The science and engineering of making intelligent machines," What is Artificial Intelligence, Stanford, 2007. Accessed on: December 5, 2019 [Online]. http://www-formal.stanford.edu/jmc/whatisai.pdfSearch in Google Scholar
[6] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). Accessed on: December 5, 2019 [Online]. http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
[7] D. Woods, “Want Responsible Robotics? Start with Responsible Humans,” Ohio State University News, July 28, 2009. Accessed on: December 5, 2019 [Online]. https://news.osu.edu/want-responsible-robotics--start-with-responsible-humans/
[8] Engineering and Physical Sciences Research Council, “Principles of Robotics: Regulating Robots in the Real World,” UK Research and Innovation, November 28, 2018. Accessed on: December 5, 2019 [Online]. https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
[9] S. Nadella, “The Partnership of the Future,” Slate, 2016. Accessed on: December 5, 2019 [Online]. https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html
[10] Institute of Physics, “Nanotechnology Pioneer Slays ‘Grey Goo’ Myths,” ScienceDaily, June 4, 2004. Accessed on: December 5, 2019 [Online]. www.sciencedaily.com/releases/2004/06/040609072100.htm
[11] N. Bostrom, “Ethical issues in advanced artificial intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, I. Smit et al. (Eds.), Tecumseh: International Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17.
[12] M. Minsky, “Conscious machines,” in Machinery of Consciousness, Proceedings of the National Research Council of Canada 75th Anniversary Symposium on Science in Society, Ottawa, June 1991. Accessed on: December 5, 2019 [Online]. http://www.aurellem.org/6.868/resources/conscious-machines.html
[13] The Human Brain Project. Accessed on: December 5, 2019 [Online]. https://www.humanbrainproject.eu/en/
[14] The OpenWorm Project. Accessed on: December 5, 2019 [Online]. http://openworm.org/
[15] R. A. Koene, “Fundamentals of whole brain emulation: state, transition and update representations,” International Journal of Machine Consciousness, vol. 4, no. 1, pp. 5–21, 2012. DOI: 10.1142/S179384301240001X.
[16] G. M. Martin, “On immortality: an interim solution,” Perspectives in Biology and Medicine, vol. 14, no. 2, pp. 339–340, 1971. DOI: 10.1353/pbm.1971.0015.
[17] M. Rothblatt, “H+: from mind loading to mind cloning: gene to meme to beme: a perspective on the nature of humanity,” Metanexus, February 5, 2009. Accessed on: December 5, 2019 [Online]. https://metanexus.net/h-mind-loading-mind-cloning-gene-meme-beme-perspective-nature-humanity/
[18] C. Pennachin and B. Goertzel, “Contemporary approaches to artificial general intelligence,” in Artificial General Intelligence, C. Pennachin and B. Goertzel (Eds.), Heidelberg: Springer-Verlag, 2007, pp. 1–30. DOI: 10.1007/978-3-540-68677-4.
[19] S. Pissanetzky, “Emergent inference, or how can a program become a self-programming AGI system?” Icelandic Institute for Intelligent Machines, May 23, 2011. Accessed on: December 5, 2019 [Online]. http://www.iiim.is/wp/wp-content/uploads/2011/05/pissanetzky-agisp-2011.pdf
[20] J. R. Searle, “Minds, brains, and programs,” Behav. Brain Sci., vol. 3, no. 3, pp. 417–457, 1980. DOI: 10.1017/S0140525X00005756.
[21] I. J. Good, “Speculations concerning the first ultraintelligent machine,” in Advances in Computers, vol. 6, F. Alt and M. Ruminoff (Eds.), Chilton: Academic Press, 1965, pp. 31–88. DOI: 10.1016/S0065-2458(08)60418-0.
[22] M. Coeckelbergh, “Virtual moral agency, virtual moral responsibility: on the moral significance of appearance, perception, and performance of artificial agents,” AI and Society, vol. 24, no. 1, pp. 181–189, 2009. DOI: 10.1007/s00146-009-0208-3.
[23] E. Yudkowsky, “Levels of organization in general intelligence,” in Artificial General Intelligence. Cognitive Technologies, B. Goertzel and C. Pennachin (Eds.), Heidelberg: Springer-Verlag, 2007, pp. 389–501. DOI: 10.1007/978-3-540-68677-4_12.
[24] V. Vinge, “The Coming Technological Singularity: How to Survive in the Post-human Era,” NASA NTRS Archive, December 1, 1993. Accessed on: December 5, 2019 [Online]. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022856.pdf
[25] G. Buttazzo, “Artificial consciousness: utopia or real possibility?” Computer, vol. 34, no. 7, pp. 24–30, July 2001. DOI: 10.1109/2.933500.
[26] G. Hallevy, “The criminal liability of artificial intelligence entities – from science fiction to legal social control,” Akron Intellectual Property Journal, vol. 4, no. 2, pp. 171–201, 2010.
[27] R. Rucker, Infinity and the Mind: Science and Philosophy of the Infinite, Boston: Birkhäuser, 1982, pp. 181–182.
[28] J. P. Sullins, “When is a robot a moral agent?” in Machine Ethics, M. Anderson and S. L. Anderson (Eds.), Cambridge: Cambridge University Press, 2011, pp. 151–161. DOI: 10.1017/CBO9780511978036.013.
[29] A. G. Haley, Space Law and Government, New York: Appleton-Century-Crofts, 1963, p. 413.
[30] J. E. Faria, “Draft to an international covenant for outer space,” in Proceedings of the 11th International Astronautical Congress, vol. 3, Stockholm: Organizing Committee of the Congress, 1961, pp. 123–125.
[31] J. G. Verplaetse, International Law in Vertical Space, New York: Rothman, 1960, pp. 161–162.
[32] R. K. Woetzel, “Sovereignty and national rights in outer space and on celestial bodies,” in Proceedings of the Fifth Colloquium on the Law of Outer Space, A. G. Haley (Ed.), Washington, DC: International Institute of Space Law, 1963, p. 31.
[33] A. Korbitz, “Altruism, Metalaw, and Celegistics: an extraterrestrial perspective on universal law-making,” in Extraterrestrial Altruism: Evolution and Ethics in the Cosmos, D. A. Vakoch (Ed.), Heidelberg: Springer-Verlag, 2014, pp. 231–247. DOI: 10.1007/978-3-642-37750-1_15.
[34] S. Dick, “The postbiological universe,” Acta Astronaut., vol. 62, no. 8–9, pp. 499–504, April 2008. DOI: 10.2514/6.IAC-06-A4.2.01.
[35] B. Vukotić and M. M. Ćirković, “Astrobiological complexity with probabilistic cellular automata,” Orig. Life Evol. Biosph., vol. 42, no. 4, pp. 347–371, August 2012. DOI: 10.1007/s11084-012-9293-2.
[36] E. C. Miller, “Ethics and space travel,” Spaceflight (Lond.), vol. 4, no. 4, p. 139, July 1962.
[37] D. M. Cole, “Extraterrestrial colonies,” Navigation, vol. 7, no. 7, pp. 83–98, Summer–Autumn 1960. DOI: 10.1002/j.2161-4296.1960.tb02444.x.
[38] J. L. Smith, “Extraterrestrial life,” JBIS J. Br. Interplanet. Soc., vol. 19, no. 10, p. 447, July/August 1964.
[39] G. S. Robinson, Envoys of Mankind: A Declaration of First Principles for the Governance of Space Societies, 1st ed., Washington, DC: Smithsonian Institution Press, November 1986, p. 198.
[40] R. A. Heinlein, Have Space Suit-Will Travel, New York: Charles Scribner’s Sons, 1958.
[41] “World internet users statistics and 2016 world population stats.” Accessed on: December 5, 2019 [Online]. https://www.internetworldstats.com/stats.htm. Compare with the 2015 ITU report: https://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2015.pdf
[42] G. S. Robinson, “What does philosophy do for space jurisprudence and implementing space law? Secular humanism and space migration essential for survival of humankind species and its ‘essence’,” McGill Occasional Paper, no. XIX, November 2016. Accessed on: December 5, 2019 [Online]. https://www.mcgill.ca/iasl/files/iasl/space_jurisprudence_and_the_need_for_a_transglobal_cybernation.pdf
[43] G. S. Robinson, “The prospect of interspecies cybernetic communication between humankind and post-humans designed and created for space exploration and space settlement,” Journal of Space Philosophy, vol. 6, no. 1, 2017. Accessed on: December 5, 2019 [Online]. http://keplerspaceinstitute.com/wp-content/uploads/2017/10/JSP-Fall-2017-10_Robinson-Final.pdf
[44] K. Schutte, Die Weltraumfahrt hat begonnen, Freiburg im Breisgau: Herder, 1958, p. 14; cited after E. Fasan, in Private Law, Public Law, Metalaw and Public Policy in Space, P. M. Sterns and L. I. Tennen (Eds.), Heidelberg: Springer-Verlag, 2016, p. 216. Available: https://doc1.bibliothek.li/aay/A022838.pdf
[45] K. F. MacDorman and H. Ishiguro, “The uncanny advantage of using androids in social and cognitive science research,” Interact. Stud., vol. 7, no. 3, pp. 297–337, 2006. DOI: 10.1075/is.7.3.03mac.
[46] Position on Robotics and Artificial Intelligence, Proposal of the Green Digital Working Group. Accessed on: December 5, 2019 [Online]. https://juliareda.eu/wp-content/uploads/2017/02/Green-Digital-Working-Group-Position-on-Robotics-and-Artificial-Intelligence-2016-11-22.pdf
[47] L. Huang, “Qualitative analysis of the application of self-determination theory in robotics tournaments,” in HRI ’17: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York: Association for Computing Machinery, 2017, pp. 135–136. DOI: 10.1145/3029798.3038342.
[48] J. Gertz, “ET Probes, Nodes, and Landbases: A Proposed Galactic Communications Architecture and Implied Search Strategies.” Available: https://arxiv.org/abs/1808.07024
[49] O. Wright, G. B. Stan, and T. Ellis, “Building-in biosafety for synthetic biology,” Microbiology, vol. 159, no. 7, pp. 1221–1235, July 2013. DOI: 10.1099/mic.0.066308-0.
[50] G. A. Landis, “Reinventing the Solar Power Satellite,” NASA/TM-2004-212743, February 2004. Accessed on: December 5, 2019 [Online]. https://space.nss.org/media/2004-NASA-Reinventing-The-Solar-Power-Satellite.pdf
[51] N. S. Kardashev, “On the inevitability and the possible structures of supercivilizations,” in The Search for Extraterrestrial Life: Recent Developments; Proceedings of the 112th Symposium of the International Astronomical Union Held at Boston University (Boston, Mass., U.S.A.), M. D. Papagianni (Ed.), Dordrecht: Springer Netherlands, 1985, pp. 497–504.
[52] A. C. Clarke, Profiles of the Future: An Enquiry into the Limits of the Possible, New York: Harper and Row, 1962, pp. 14–36.
[53] G. H. Stine, “How to get along with an extraterrestrial... or your neighbor,” Analog Science Fiction/Science Fact, vol. 100, no. 2, pp. 39–47, February 1980.
© 2020 Kamil Muzyka, published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.