Artificial intelligence (AI) is not a new term; it was first used in the mid-1950s. Since its inception, AI has enabled computers to perform some tasks that are normally done by humans.
As AI has become more prevalent over the last few years, myths have grown up around the technology. The idea of machines learning and making decisions like a human brain is itself seen as the biggest threat. Scientists around the world have been warning about the dangers of AI for decades.
One of the earliest such claims was put forth in 1958 by Herbert Simon and Allen Newell, who wrote: "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied."
Myth #1: AI is smarter than people
There is no artificial intelligence without human intelligence behind it. The people who design the algorithms and supply the data shape what an AI system can do. To build and train a model, you have to feed it information, and it is only as smart as the data and objectives you give it.
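To make that concrete, here is a minimal sketch (using scikit-learn as an assumed tool; the texts and labels are invented for illustration) of a classifier whose "knowledge" is bounded entirely by what its human builders taught it:

```python
# A toy classifier: it can only answer with the categories
# its training data contains, nothing more.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Humans chose both the examples and the labels.
texts = ["great match today", "the team scored twice",
         "stocks fell sharply", "markets rallied on earnings"]
labels = ["sports", "sports", "finance", "finance"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Even for text about cooking, the model can only ever answer
# "sports" or "finance" -- no one taught it anything else.
print(model.predict(["how to bake sourdough bread"]))
```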
Myth #2: AI will make medical diagnoses
Medical professionals use technology to work more efficiently. A radiologist who is an expert in evaluating X-rays, CT scans, and other medical imagery may use AI for a first-pass, primary-level reading. However, it is the human doctor who determines the diagnosis and makes the medical decisions.
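As a rough illustration of that division of labor, here is a hypothetical triage sketch; the function name, score, and threshold are all invented for this example. The model only prioritises the worklist, and a radiologist still reads every scan:

```python
# Hypothetical human-in-the-loop triage: the AI ranks scans,
# a radiologist makes every diagnosis.

def triage_scan(model_score: float, threshold: float = 0.5) -> str:
    """Route a scan based on an AI model's abnormality score.

    The model never issues a diagnosis; it only decides how
    urgently a human expert should look at the scan.
    """
    if model_score >= threshold:
        return "urgent: send to radiologist for immediate review"
    return "routine: queue for standard radiologist review"

# Either way, a human doctor makes the final call.
print(triage_scan(0.91))
print(triage_scan(0.12))
```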
Myth #3: Modeling determines the outcome
AI initiatives typically begin as pilot projects. You may get excellent results during the testing phase, but the real results only come after you deploy to production. Training an AI model is never complete: as you feed it more data, the model evolves and its accuracy can improve.
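As a rough sketch of what "training is never complete" can look like in practice (assuming scikit-learn and synthetic data, both assumptions for this example), a deployed model can be updated incrementally as new data arrives:

```python
# Incremental learning sketch: the model is trained once on pilot
# data, then kept up to date with batches from production.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

# Initial training on a small pilot-phase batch.
X_pilot = rng.normal(size=(100, 5))
y_pilot = (X_pilot[:, 0] > 0).astype(int)
model.partial_fit(X_pilot, y_pilot, classes=[0, 1])

# After deployment, keep updating the same model as data streams in.
for _ in range(10):
    X_new = rng.normal(size=(50, 5))
    y_new = (X_new[:, 0] > 0).astype(int)
    model.partial_fit(X_new, y_new)

# Evaluate on a fresh batch the model has never seen.
X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] > 0).astype(int)
print("accuracy on a fresh batch:", model.score(X_test, y_test))
```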