The 23 Asilomar Principles And Why They Matter, According To Stephen Hawking and Elon Musk

The advancement of Artificial Intelligence (AI) brings with it concerns nearly as old as industry itself: the first recorded rejection of a patent for a labor-saving machine came in the 16th century.

Elizabeth I turned down William Lee’s patent request for a rudimentary knitting machine, known as a stocking frame, out of concern that it would put hand-knitters out of work. Pressure for greater automation during the Industrial Age, however, ensured that the stocking frame (which was eventually granted its patent) would be only the beginning of the struggle between humans and machines. Artificial Intelligence, for all its theoretical benefits, has always been met with a degree of skepticism. Only recently, however, has concern grown to the point that cosmologist Stephen Hawking and Tesla CEO Elon Musk have been asked to weigh in on how AI and humans should interact.

As machines become increasingly intelligent and ever more capable of tasks once thought to require humans, it is increasingly important to consider the rules that should govern their behavior. Asimov’s Three Laws of Robotics provided an excellent foundation when they were first conceived, but the proliferation of interaction between humans and AI in recent years demands guidelines that are more detailed and more sensitive to both humans and AI. Recently, such guidelines were developed, and they proved so broadly acceptable that thinkers as prominent as Elon Musk and Stephen Hawking endorsed them.

These guidelines number 23, not merely 3, and came about after several days of vigorous debate among many of the world’s foremost scientists, philosophers, economists, and experts in other fields at the Future of Life Institute’s Beneficial AI 2017 conference. (Musk himself donated $10 million to the Future of Life Institute in 2015.) For a guideline to make the list, officially known as the Asilomar AI Principles, at least 90% of the attendees had to agree on it. The Principles are divided into three groups: Research Issues, Ethics and Values, and Longer-Term Issues. They address concerns such as privacy, security, weapons control, and judicial transparency, and aim to set out a framework within which AI and humans can continue toward a mutually beneficial future.

So why is this important? Most people agree that it is worth avoiding a Skynet-type scenario, but given how broadly and how quickly AI technologies have already spread, a framework like this looks essential right now. The uproar over self-driving cars eliminating driving jobs, the growing attention to a Guaranteed Basic Income amid concerns that automation will shrink the workforce so significantly that many people may literally be unable to work for a living, and the regularity with which Facebook’s often-contentious privacy policies make headlines around the world are all live issues today, and all demonstrate the extent to which our lives are already enmeshed with AI.

Elon Musk himself is quoted as saying, “Here are all these leading AI researchers saying that AI safety is important. I agree with them.” Hawking, for his part, has stated, “The development of full artificial intelligence could spell the end of the human race.”

The adoption of the Asilomar Principles is essential to the continued harmonious growth and development of AI in a fundamentally human world.

The complete list of Asilomar A.I. Principles:

1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in A.I. should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future A.I. systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with A.I., and to manage the risks associated with A.I.?
  • What set of values should A.I. be aligned with, and what legal and ethical status should it have?

3. Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of A.I.

5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards.

6. Safety: A.I. systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an A.I. system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage and control the data they generate, given A.I. systems’ power to analyze and utilize that data.

13. Liberty and Privacy: The application of A.I. to personal data must not unreasonably curtail people’s real or perceived liberty.

14. Shared Benefit: A.I. technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by A.I. should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced A.I. systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. A.I. Arms Race: An arms race in lethal autonomous weapons should be avoided.

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future A.I. capabilities.

20. Importance: Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by A.I. systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: A.I. systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
