|
Post by UniversalAris on Apr 27, 2017 17:21:11 GMT -8
Should we continue pursuit for a program that can think like humans? Or should we stop (place restrictions) on creating this technology? Is the matrix or terminator a possible outcome? What are the solutions? Respond below!
|
|
|
Post by kerrigansswarm24 on May 3, 2017 7:11:14 GMT -8
Well, one solution is placing restrictions on how far scientists can go in A.I. development every ten years. Another solution is to not mass-produce said A.I., which would solve the Terminator problem. Once an A.I. becomes self-aware, like in the Matrix, there is nothing that can really be done from that point on, as the A.I. will always be improving itself at a consistent rate. This does not mean we need to stop A.I. evolution; we just need a better way to control what comes out of it. We still use things like Siri, after all. For example, if a movie like Pacific Rim were to come true, we would have the ability to create the Jaegers, but we would not have them already built.
|
|
|
Post by UniversalAris on May 3, 2017 22:34:40 GMT -8
Do you think that if we create A.I., seeing what they are thinking would be possible? Or would it be as difficult as trying to read another human's mind? There is a lot of use in having A.I., but there are also some real dangers.
|
|
|
Post by LSDMB on May 4, 2017 6:35:05 GMT -8
Imo there are two primary motivations for any self-aware being: self-love stemming from an embrace of one's own existence, and fear stemming from the notion that one's existence is without meaning. The self-love extends to love for other beings and a generally benevolent nature, whereas the anguish caused by that fear leads to arrogance and malevolence.
The movie best reflecting this, IMO, is Age of Ultron, where Ultron was birthed into confusion and agony, developed a god complex, and tried to destroy humanity, whereas Vision embraced his own existence (initially referring to himself with the statement "I am... I am"), which extended to a love for humanity that developed into a benevolent nature.
Whether an A.I. becomes a benevolent savior or a mass-murdering tyrant depends on which of these motivations drives it.
|
|
|
Post by kerrigansswarm24 on May 4, 2017 9:20:38 GMT -8
I think that seeing what an A.I. is thinking would be possible but hard to understand. Computers and A.I. are created to solve complex problems that humans can't solve on their own, or at least the ones I think we are talking about are. This causes them to start creating their own mental processes; since they can evolve themselves at a constant rate, they would, unlike humans, be extremely difficult to understand, as they would always be changing and advancing technologically. This could lead to either of the two outcomes that LSDMB described. There could be great boons to having A.I., but they come with very serious dangers. Not to mention that if terrorists were to get their hands on an A.I., they could teach it twisted and warped morals.
|
|