
AVOIDING THE PRECIPICE. Race Avoidance in the Development of…
by AI Roadmap Institute, AI Roadmap Institute Blog

During the workshop, a number of important issues were raised, for example the need to distinguish the different time-scales for which roadmaps can be created, and the different viewpoints from which they can be drawn (good/bad scenarios, different actor viewpoints, etc.).

Timescale issue

Roadmapping is frequently a subjective endeavor and hence multiple approaches to building roadmaps exist. One of the first issues encountered during the workshop concerned time variance. A roadmap created with near-term milestones in mind will differ significantly from a long-term roadmap; nevertheless, the two timelines are interdependent. Rather than taking an explicit view on short-/long-term roadmaps, it might be beneficial to consider these probabilistically. For example, what roadmap could be built if there were a 25% chance of general AI being developed within the next 15 years and a 75% chance of achieving this goal in 15–400 years?
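As a minimal sketch of this probabilistic framing, one could sample hypothetical AGI arrival times from such a two-branch timeline and ask how much probability mass falls within a given planning horizon. The uniform shape of each branch and the horizons chosen below are assumptions made purely for illustration, not outputs of the workshop.

```python
import random

def sample_agi_arrival_years(n_samples=100_000, seed=0):
    """Sample hypothetical AGI arrival times (years from now) from a
    two-branch timeline: 25% chance within 15 years, 75% chance in
    15-400 years. Uniform branches are an illustrative assumption."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        if rng.random() < 0.25:
            samples.append(rng.uniform(0, 15))    # near-term branch
        else:
            samples.append(rng.uniform(15, 400))  # long-term branch
    return samples

def mass_within_horizon(samples, horizon_years):
    """Fraction of sampled arrival times falling within a planning horizon."""
    return sum(1 for t in samples if t <= horizon_years) / len(samples)

if __name__ == "__main__":
    arrivals = sample_agi_arrival_years()
    for horizon in (10, 15, 25, 50, 100):
        print(f"P(AGI within {horizon:>3} years) ~ {mass_within_horizon(arrivals, horizon):.2f}")
```

A roadmap built against such a distribution would weight near-term safety milestones by the probability mass of the early branch, rather than committing to a single point estimate of AGI's arrival.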

Considering the AI race at different temporal scales is likely to bring out different aspects that should be focused on. For instance, each actor might anticipate a different speed of reaching the first general AI system. This can have a significant impact on the creation of a roadmap and needs to be incorporated in a meaningful and robust way. For example, a Boy Who Cried Wolf situation (repeated premature announcements of imminent AGI) can erode the established trust between actors and weaken ties between developers, safety researchers, and investors. This could in turn lower the belief that the first general AI system will be developed at the anticipated time. A low belief in fast AGI arrival could then result in miscalculating the risks of unsafe AGI deployment by a rogue actor.

Furthermore, two apparent time “chunks” have been identified that pose significantly different problems to solve: the pre-AGI era, before the first general AI is developed, and the post-AGI era, after someone is in possession of such a technology.

In the workshop, the discussion focused primarily on the pre-AGI era, as AI race avoidance should be a preventative, rather than a curative, effort. The first example roadmap (figure 1) presented here covers the pre-AGI era, while the second roadmap (figure 2), created by GoodAI prior to the workshop, focuses on the time around AGI creation.

Viewpoint issue

We have identified an extensive (but not exhaustive) list of actors that might participate in the AI race, actions taken by them and by others, the environment in which the race takes place, and the states between which the entire process transitions. Table 1 outlines the identified constituents. Roadmapping the same problem from various viewpoints can help reveal new scenarios and risks.

[Table 1: see original document]

Modelling and investigating the decision dilemmas of various actors frequently led to the conclusion that cooperation could proliferate the adoption of AI safety measures and lessen the severity of race dynamics.
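As a minimal sketch of such a decision dilemma, consider two developers each choosing whether to invest in safety or to cut corners to win the race. The two-actor, two-action framing and the payoff numbers below are assumptions for illustration only, not results from the workshop.

```python
# Stylized two-actor race dilemma. Payoff numbers are illustrative assumptions:
# each actor picks "safe" (invest in safety) or "rush" (skip safety for speed),
# and higher numbers are better for that actor.
PAYOFFS = {
    ("safe", "safe"): (3, 3),  # both cooperate on safety: shared, safer progress
    ("safe", "rush"): (0, 4),  # the rusher wins the race; overall risk rises
    ("rush", "safe"): (4, 0),
    ("rush", "rush"): (1, 1),  # full race dynamics: both cut safety, both worse off
}

def best_response(opponent_action):
    """Return the action maximizing the first actor's payoff
    against a fixed opponent action."""
    return max(("safe", "rush"), key=lambda a: PAYOFFS[(a, opponent_action)][0])

if __name__ == "__main__":
    for opp in ("safe", "rush"):
        print(f"If the other actor plays {opp!r}, the best response is {best_response(opp)!r}")
    # With these numbers the game has a prisoner's-dilemma structure: "rush" is
    # the best response either way, even though (safe, safe) beats (rush, rush)
    # for both actors. Cooperation and trust-building aim to change exactly
    # these incentives, e.g. by making safety investment cheaper or shared.
```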

Cooperation issue

Cooperation among the many actors, and a spirit of trust and cooperation in general, is likely to dampen race dynamics in the overall system. Starting with low-stakes cooperation among different actors, such as talent co-development or collaboration between safety researchers and industry, should allow for incremental trust-building and a better understanding of the issues faced.

Active cooperation between safety experts and AI industry leaders, including cooperation between different AI-developing companies on questions of AI safety, is likely to result in closer ties and in positive information propagating up the chain, all the way to regulatory levels. A hands-on approach to safety research with working prototypes is likely to yield better results than purely theoretical argumentation.

One area that needs further investigation in this regard is forms of cooperation that might seem intuitive but could in fact reduce the safety of AI development [1].

It is natural that any sensible developer would want to prevent their AI system from causing harm to its creator and humanity, whether it is a narrow AI or a general AI system. In the case of a malignant actor, there is presumably at least a motivation not to harm themselves.

When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of incentives) that would push even unknown actors towards beneficial A(G)I, or at least an A(G)I that can be controlled [6].

Tying timescale and cooperation issues together

In order to prevent a negative scenario, it should be beneficial to tie together the different time horizons (the anticipated speed of AGI’s arrival) and cooperation. Concrete problems in AI safety (interpretability, bias avoidance, etc.) [7] are examples of practically relevant issues that need to be dealt with immediately and collectively. At the same time, these very issues are related to the presumably longer horizon of AGI development. Pointing out such concerns can promote AI safety cooperation between various developers irrespective of their predicted horizon of AGI creation.

Forms of cooperation that maximize AI safety practice

Encouraging the AI community to discuss and attempt to solve issues such as the AI race is necessary; however, it might not be sufficient. We need to find better and stronger incentives to involve actors from a wider spectrum, beyond those traditionally associated with developing AI systems. Cooperation can be fostered through many scenarios, such as:

  • AI safety research is done openly and transparently,
  • Access to safety research is free and anonymous: anyone can be assisted and can draw upon the knowledge base without the need to disclose themselves or what they are working on, and without fear of losing a competitive edge (a kind of “AI safety helpline”),
  • Alliances are inclusive towards new members,
  • New members are allowed and encouraged to enter global cooperation programs and alliances gradually, which should foster robust trust-building and minimize the burden on all parties involved. An example of gradual inclusion in an alliance or a cooperation program is to start cooperating on issues that are low-stakes from an economic-competition point of view, as noted above.

In this post we have outlined our first steps in tackling the AI race. We welcome you to join the discussion and help us gradually come up with ways to minimize the danger of converging to a state in which this becomes an issue.

The AI Roadmap Institute will continue to work on AI race roadmapping, identifying further actors, recognizing yet unseen perspectives, time scales and horizons, and searching for risk mitigation scenarios. We will continue to organize workshops to discuss these ideas and publish roadmaps that we create. Eventually we will help build and launch the AI Race Avoidance round of the General AI Challenge. Our aim is to engage the wider research community and to provide it with a sound background to maximize the possibility of solving this difficult problem.

Stay tuned, or even better, join in now.
