Artificial intelligence (AI) has seen a historic rise in the public eye and within businesses. With a plethora of astounding abilities, the technology is remarkable. But its rise does not come without concern.
The primary critique relates to exactly how ‘intelligent’ artificial intelligence has become. Even in its current form, while the technology is still in its infancy, AI has passed the Turing test. Essentially, this means it is intelligent enough to pass as a real human being. It has yet to pass the Lovelace test, which requires an AI to create something its creators cannot explain. This test is meant to show that the technology can express creativity and possesses the ability to create genuinely new things. Although AI has yet to pass the Lovelace test, its rapid evolution suggests it may soon achieve this milestone.
So far, we have discovered that the subconscious biases and judgments of the humans who train AI are impressed upon the technology itself. Because of this, the objectivity of AI is always called into question, even though people often rely on AI to be fair. Furthermore, the novelty of this technology means there are likely more unforeseen consequences we have not yet discovered.
Consequently, major figures like Elon Musk, Steve Wozniak, and Emad Mostaque have called for halting or significantly slowing AI development. Because this technology is unprecedented, the risks associated with it are also undefined, which makes AI inherently dangerous. That same unprecedented nature also means we likely lack the tools to control it effectively should something go wrong. According to critics, these inherent dangers must be mitigated by stopping rapid development and taking delicate care when teaching AI. As the future presses forward with AI, we must ensure that creating such powerful technology does not produce a powerful tool turned against its creators.
Source: Academic Influence