Tuesday, June 4, 2019

Arguments on Artificial Intelligence

We live in an extraordinary time. Improvements in engineering seem to be accelerating at an unbelievable rate. Every time they think Moore's Law has reached its limits, tech companies come up with a new level of capability. No less is the advancement of artificial intelligence (AI). Our everyday lives are already deeply immersed in AI, and we don't even know it. It controls much of the financial markets, performs law enforcement tasks, and makes our internet searches more useful. Most AI today is weak AI, designed to perform a very specific task (Tegmark, n.d.). But the goal of all research and corporate investment is always more: what else can we know or do? Often, these entities are creating things in a vacuum, with limited moral, ethical, or legal boundaries. When is it too much? The driving force that makes us want to always explore further is what makes the development and use of artificial intelligence (AI) a risky course of action.

Why is this a risky course of action? Because giving control of systems to artificial intelligence could have seriously negative results. Take, for example, researchers working with the University of Pittsburgh Medical Center. In this case, they developed a neural network that returns suggestions for treatment of pneumonia patients. Using a historical database of treatment methods and their results, the AI is supposed to provide suggested solutions for treating patients. In one solution, it recommended that certain high-risk patients be sent home (Bornstein, 2016). This solution had a high probability of resulting in death.

When working with any complex task, accomplished by man or machine, the law of unintended consequences must always be considered. No matter how well someone thinks they have thought a system through, it is almost impossible to consider every possible outcome.
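The pneumonia case is a concrete instance of this: a model trained only on historical outcomes absorbs correlations, not reasons. A minimal sketch of that failure mode follows. All the data here is invented for illustration; the asthma detail reflects widely reported accounts of the study, in which asthmatic pneumonia patients died less often only because doctors always treated them aggressively.

```python
# Toy illustration (hypothetical data) of how a model trained on
# historical outcomes can learn a dangerous rule. In these made-up
# records, asthma patients with pneumonia died LESS often -- not because
# asthma is protective, but because those patients received intensive
# care. A model that sees only features and outcomes learns the
# correlation, not the reason behind it.

records = [
    # (has_asthma, died) -- asthmatics survived because of aggressive care
    (True, False), (True, False), (True, False), (True, False),
    (False, False), (False, False), (False, True), (False, True),
]

def death_rate(group):
    """Fraction of patients in the group who died."""
    deaths = sum(1 for _, died in group if died)
    return deaths / len(group)

asthma = [r for r in records if r[0]]
no_asthma = [r for r in records if not r[0]]

# The "learned" rule: the group with the lower historical death rate
# looks low-risk, so the system recommends sending them home.
risk = {"asthma": death_rate(asthma), "no_asthma": death_rate(no_asthma)}
recommendation = "send home" if risk["asthma"] < risk["no_asthma"] else "admit"
print(risk)            # asthma patients look low-risk in the data
print(recommendation)  # so the model recommends sending them home
```

The point of the sketch is that the recommendation is statistically faithful to the data and clinically backwards: the very reason the group looks safe (aggressive treatment) disappears if the recommendation is followed.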
Certainly, unintended consequences are not all bad; many drugs have side effects that are beneficial and entirely not what the drug was designed to do. On the other hand, many drugs have very negative side effects. Certainly, they are not intended to cause any ill symptoms, but many have severe unintended consequences, including death.

Some would argue that AI is currently in use and benefits everyone with no negative effects, and that the singularity cannot happen. While we certainly use some types of AI currently and have had minimal negative effects, it is also true that we have not reached the singularity. It is the height of hubris to believe that we have total control over anything or that we have considered all possibilities. Consider Fukushima or Chernobyl: all possibilities were not covered, and the result was huge disasters. Even NASA, the standard for elaborate scrutiny of complex systems and procedures, has had harmful failures in the form of space shuttle disasters due to the hubris of the organization and/or individuals.

How many people died on the Titanic? A ship that was unsinkable was sunk by a simple iceberg, or was it hubris? The shoddy steel used in the construction of the hull, the poorly designed bulkheads that didn't reach the top deck, and the pressure to go as fast as it could are what sunk the ship. And not enough lifeboats on the unsinkable ship killed the passengers. Hubris led them down the path to destruction.

We are at the point that we have the capability to combine AI technologies to create autonomous military machines. Some are even in the testing phase of development. These are machines that make decisions of life and death on their own (Russell, 2015). Absent human intervention, what is to keep one of these machines from deciding the wrong person is a target? A machine knows no morality, no ethical code, only its programming, its goal or reason to exist.
Given a powerful enough computational system, it could decide to use everything at its disposal to achieve its goals (Anderson, 2017), including taking control of infrastructure, or even humans.

So, what do we do? Is there risk? Even captains of industry and experts like Gates, Musk, and Hawking suggest there is (Holley, 2015). It is clear we are already on the path to creating ever more complex and capable AI. We must recognize that we all make mistakes and constantly be on guard against mistakes and, more importantly, hubris. Most expansion of knowledge has risk. When confronted with a discipline that has catastrophic possibilities, we must fight the desire to run forward as fast as we can with no concern for the consequences. Methodical deliberation is the only course. We must consider the ramifications of each step and ensure safeguards are in place should we need to terminate or isolate any AI that develops goals counter to those of humans. If we manage to be conscientious enough and adhere to ethical principles, we might, just might, keep from developing the instrument of our own demise.

References

Anderson, J. (2017, February 16). Google's artificial intelligence getting greedy, and aggressive. Activist Post. Retrieved from http://www.activistpost.com/2017/02/googles-artificial-intelligence-getting-greedy-and-aggressive/

Artificial intelligence. (2015). In Opposing Viewpoints Online Collection. Detroit: Gale. Retrieved from http://link.galegroup.com.ezproxy.libproxy.db.erau.edu/apps/doc/PC3010999273/OVIC?u=embryxid=415989d5

Bornstein, A. (2016, September 1). Is artificial intelligence permanently inscrutable?

Holley, P. (2015, January 29). Bill Gates on the dangers of artificial intelligence: "I don't understand why some people are not concerned." The Washington Post. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/

Russell, S. (2015, May 28). Take a stand on AI weapons. Nature, 521(7553), 415-416.
