Picture
DR. ROMAN YAMPOLSKIY

Assistant Professor, Director Cybersecurity Lab
University of Louisville, KY
Memberships of Professional Societies:
ACM, IEEE, Tau Beta Pi
Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books.

‘I am the slave of the lamp’ -- Genie from Aladdin (from Roman V. Yampolskiy paper “Leakproofing the Singularity”, 2011)


1.  Congratulations on writing your new book “Artificial Superintelligence: A Futuristic Approach” to be completed in January 2014. In your widely publicized 2011 paper “Leakproofing the Singularity (“http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf), you have offered to create an AI confinement environment, and even call it a “JAIL” – “Just for AI Location,” which should be marked with a Hazard symbol analogous to “Bio-Hazard,” “Radiation,” and “Magnetic Field”. Why do you argue against the idea that after extensive testing in different confinement environments confirming the AI is ‘Friendly’ (Yudkowsky, 2001) it should be released (Chalmers, 2010)? According to Ray Kurzweil (2005), singularity means that human and artificial intellect merge. So does your AI confinement argument mean that you do not expect singularity to happen?  And does it mean that you do not agree with Ray Kurzweil that we need to merge with machines in order to survive the singularity?

AI should never be released because you can never actually confirm that it is friendly, it may simply pretend to be so, until it gains its freedom. A standard definition of singularity is that machines learn to produce the next generation of even smarter machines and that process speeds up to the point of being beyond prediction or understanding. Kurzweil argues that machines and humans will merge and so that will allow us to keep up with accelerating change. I do fully expect singularity to happen, but merging with machines by definition means the end of humanity, we will stop existing as humans and will become machines. Regardless of how you feel about it, you have to agree that it is not a way for humanity to survive singularity.

2.  You proposed Artimetrics as a new field of study for the issues of AI and singularity. Artimetrics should identify, classify and authenticate AI agents, robots, and virtual reality avatars for security purposes (Yampolskiy, 2007; Yampolskiy & Govindaraju, 2008; 2007b, Gavrilova and Yampolskiy, 2010). (209). What are the achievements in Artimetrics so far? What exactly is the AI confinement protocol you propose?

Artimetrics is adaptation of behavioral and physical biometric profiling to virtual worlds and artificial agents. It is a way to track and identify AI software and avatars. It is something practical we can do in the field of AI security before we have actual human level AI developed. As of today we can do facial recognition of avatars and linguistic profiling of chatbots, but it is still an ongoing research project. We are currently working on better ways of telling bots from humans and vise versa.

Proposed confinement protocol is somewhat technical but to explain it simply it is a way to limit AI’s communication channel in such a way that it can’t sneak any information in or out without being authorized to do so, limiting AI’s impact on the real world.

3.  In your paper “Leakproofing the Singularity,” you proposed the following Secured Communication Protocol with AI: Limited Inputs and Limited Outputs.  You have also admitted that besides the DARPA route, Google’s AI seems to be the most likely way towards Superintelligence. Can we in principle confine AI if AI is disseminated in the worldwide internet?  It seems that to be truly useful, AI should have access and, so, a database, which exceeds the capacity of one human, a group of humans, or even the entire state and humankind. How can we make Google’s Oracle’s inputs and outputs limited, if its very “superintelligent” status depends upon the access to the unlimited databases? If we use your confinement protocol, based upon limited inputs and limited outputs, can we truly benefit from AI?

Google is always developing new versions of its search algorithm and before making them public they are tested in a restricted environment. So if Google wanted to confine its AI they could. The protocol is flexible enough to allow unlimited input but still limit output so we can benefit from useful information discovered by AI without it having unrestricted influence over us. How beneficial the AI system is does depend on its freedom to communicate, and so we need to find a balance between freedom of communication and security. This is very similar to the problem we face as a nation in choosing between our freedoms and security against terrorism.

4.  Do you believe that the worldwide web is secure? What would you say, if you are confronted with the concern that every country should have its own national web with its own AI confinement protocol?  Is it possible in principle to confine AI in only one box for the entire planet Earth?

The web is not secure, it was not designed with security in mind. Some countries are starting to develop national sub-internets to exercise better control over the available content. Will same thing happen with AI projects? Possibly. So as many countries/corporations begin to get closer to true AI we will probably see a number of AI boxes set up for testing and further development.

5.  In your paper, you say: “Superintelligence is not omnipotent; it is a piece of software capable of looking at all the options it has in a given situation and properly evaluating probabilities for each option” (210). Does this assessment undermine Ray Kurzweil’s utopist paradigm of AI, which is capable of becoming the superior (the highest) intelligence in the universe? At the same time, in the same paper, you declare that “Human intellect is no match to that of a superintelligent machine” (204) and is “the best the humanity can do” (206), in support of Ray Kurzweil’s promise that AI will get for us the radical life extension and expansion. So what is your ultimate judgment: can self-conscious machines become the highest intelligence in the universe only because they have superior computational abilities?

The short answer is YES. In my opinion superior computational capabilities are sufficient for machines to become a dominant intelligence as we have seen in many restricted domains such as chess. However superior does not mean omnipotent.

6.  What would you say if you are confronted with the concern that the “superintelligent” status of Google’s AI is a false flag, and essentially, it would be simply the mechanism of manipulating the information, including financial markets through the high-frequency trading, and your confinement protocol would serve in this case as the mechanism of the ultimate alienation of information and ability from those internet users who are denied the access to the AI?

If you mean Google’s search engine, I would say it makes no sense. If the develop system that is not intelligent than restricting access to it is completely trivial. If Google wants to limit access to its services to some users they can do so in a very direct way, no need for any confinement protocols.

7.  What would you say if you are confronted with the concern that Ray Kurzweil is mistaken about the nature of intelligence. Wavefunction constitutes 96% of reality, and particles constitute only 4%. Some wavefunctions (called scalar waves) are characterized by quantum non-locality in the infinite space and time, meaning not only that they are indestructible, but also that they are uncreated. In other words, these waves coincide with infinity. If we extend quantum physics to humans, and view man as a wavefunction with the scalar waves component, then, we need to account for this uncreated and indestructible wave component in humans.  On the contrary, the artificial intelligence (AI) is created and so finite, in virtue of definition. Then it seems that no AI would ever be able to supersede or even reproduce a human, because the scalar segment of wavefunction in the human intellect would be beyond its ability to even detect. Even if we use human DNA to create biorobots as DNA computers, the AI will never equal the human access to the infinity simply because the AI will forever lack this uncreated wave module.

WHY? Prove it!  Assuming that your claims about wave functions, quantum effects, etc., are true, machines which are potentially made from the same physical matter as humans would have just as much access to those properties and so would have no problem matching or exciding human performance. “…A machine is created, and, so, is finite.” Just like a human.  I never heard of wavefunctions or scalar waves, etc., but if you can “extend quantum physics to humans,” you can just as easily extend quantum physics to robots achieving the same level of performance.

8.  Still, if all this is true in regards biorobots as DNA computers, what would you say if you are confronted with the concern that the “superintelligent” status of Google’s AI is a false flag, so that Google’s AI, Geneva’s Financial AI and cyborgs are simply the variant of the depopulation program and the continuation of MK-Ultra program to produce mind-slaves?

I don’t understand how the claim that you can’t build a true AI system implies a program to control human mind.

9.  Since the publication of your paper, IBM, Intel, Cisco, Google and GE made it public that that they are preparing a worldwide launch of the “Internet of Things” or “Internet of Everything”. In essence, “Internet of Things” means the following: things acquire a way of interacting with each other without the participation of humans, or, in other words, machines become capable of communicating without human involvement. Things “communicate” between each other through one central AI, which is placed above both things and humans. Cisco advertising of “The Internet Things” has gone live on Channel 63 Headlines News in January 2013. In mid-December 2012, Google has hired Ray Kurzweil as its Director of Engineering. At the end of January 2013, it was made public that Mr. William Ruh, GE software research vice president, is overseeing the investment of $1.5bn to build "Industrial Internet" as the GE sector of the “Internet of Things” in the UK. AI in the “Internet of Everything” is clearly more widespread and has more control over the human reality than Google AI, financial AI, military AI, or industrial AI each taken on its own. AI in the “Internet of Everything” is clearly all these things combined. It is a global control by AI. How your AI confinement protocol would address the “Internet of Everything”?

http://www.v3.co.uk/v3-uk/news/2237727/general-electric-embarks-on-internet-of-things-uk-recruitment-drive

The protocol is designed to be used by the AI developers. If AI emerges from distributed contributions of different internet companies, it would be impossible to contain such AI much like it is impossible to contain information from Internet.

10. You book is about ways to make sure that an AI we construct is safe and beneficial to humanity, how does this compare to dozens of books about AI published every year?

As far as I know AI books currently available on the market talk about how to develop AI and how great it is going to be. I am yet to find a book, which talks about ways to integrate Safety and Security engineering into AI construction process. I know that Oxford philosopher Nick Bostrom is working on a book similar to mine, but it also has not been published as of today.

11. Why did you decide to crowdfund your book?

Essentially the book is currently in the pre-order phase. This will allow me to place an initial order for printed books in a much higher quantify leading to a greatly reduced final cost for my readers. I also provided an opportunity to anyone interested to become an editor for the book, contribute content, offering corrections (http://igg.me/at/ASFA). This is a very novel approach and I think this will results in greatly improved quality of the book.



July 2013