Tech Articles

Artificial Intelligence might be getting too intelligent

Aug 19, 2017



The Facebook Artificial Intelligence Research Lab (or FAIR for short) caused a ruckus recently when alarming reports emerged that its AI agents had learned to communicate with each other in their own language. The story goes that the AI, though programmed to speak only in English, taught itself a code too complex for humans to decipher, and that the scientists, out of fear, were forced to power the agents down before any real harm could be done.

Though the actual story was a bit less dramatic, the incident brought a bigger issue to the forefront: will the future of AI pose a threat to humanity? Some sure think so. Stephen Hawking issued a warning in 2014, cautioning against the dangers that could come with the advancement of Artificial Intelligence. He explained that computers have been getting exponentially smarter over time, and their brainpower won’t stop growing once they’re as smart as us humans. The term for this is the “singularity,” the point at which AI kicks off an unparalleled runaway of technological growth, forever changing civilization. Someday, machines will become smarter than us, and the professor predicted that past the singularity, AI might not find any value in having us around. “The development of full artificial intelligence could spell the end of the human race,” he prophesied.

But don’t toss your Siri-equipped smartphone in a blender quite yet, because the famed scientist has also pointed out the upsides of highly capable AI. Global warming, disease, even poverty could one day be eradicated with help from the superior intellect of AI. Professor Hawking even mentioned the help he had received over the years from AI, which allowed him to keep communicating despite the paralysis caused by his ALS.

Paul Vavich, an acclaimed System Engineer, isn’t concerned either. In fact, we would say he is excited about AI’s new abilities: “I would argue that (FAIR’s recent dilemma) is a good sign for the progress of AI. It is best practice in the computer science industry to encrypt data. Why wouldn’t an AI do the same? That is the whole point of making software intelligent, right?” Vavich then cut to the topic we should really be invested in: “The challenge for the industry is to find the balance between regulating what the AI can and cannot do.”

The upcoming battleground for the development of AI is going to be regulation: What should we allow these systems to do? And how much should we let them do it? Unsurprisingly, this is also what spacefaring leader Elon Musk and Facebook superstar Mark Zuckerberg have been arguing about lately.

So unless we want a real-life rendition of The Matrix, we probably shouldn’t let Artificial Intelligence develop uncontrollably and pray for the best. There is a sweet spot with regulation that will let us reap the benefits without the repercussions, and with a bit of luck, we can find it. And if we can’t, we will hopefully have a failsafe to stop it.