Source – https://www.bbntimes.com/
“AI” is a construct that has attracted increasing attention across technology, media, business, industry, government and civil life in recent years.
Today’s AI is the subject of controversy. You might have heard about narrow/weak, general/strong/human-level and super artificial intelligence, or about machine learning, deep learning, reinforcement learning, supervised and unsupervised learning, neural networks, Bayesian networks, NLP, and a whole lot of other confusing terms, all labelled AI techniques.
Many of the rules- and logic-based systems that were previously considered artificial intelligence are no longer counted as AI. In contrast, systems that analyze data and find patterns in it are called machine learning, now widely promoted as the dominant form of AI.
What is Wrong with Today’s AI, Its Chips and Platforms?
Much of the confusion comes from anthropomorphic Artificial Intelligence (AAI): the simulation of the human brain using artificial neural networks, as if they could substitute for the biological neural networks in our brains. A neural network is made up of a set of neural nodes (functional units) which work together, and can be called upon to execute a model.
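To make this concrete, here is a minimal sketch (not from the source; the layer sizes and random weights are arbitrary illustrative choices) of such nodes working together to execute a model:

```python
# A toy two-layer neural network: "neural nodes" are just weighted sums
# followed by a nonlinearity, working together to execute one model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 nodes
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # layer 2: 4 nodes -> 2 outputs

def execute_model(x: np.ndarray) -> np.ndarray:
    h = np.tanh(W1 @ x + b1)   # each row of W1 holds one node's input weights
    return W2 @ h + b2         # output nodes combine the hidden nodes

print(execute_model(np.array([0.5, -1.0, 2.0])))
```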
Thus, the main task in 2021 is to provide a conceptual framework to define Machine Intelligence and Learning. And the first step in creating MI is to understand its nature or concept against the main research questions (why, what, who, when, where, how).
So, describe AI to people as AAI, augmented intelligence, or advanced statistics, not as artificial intelligence or machine intelligence.
Now, what about the levels of AAI applications, tools, and platforms?
Let’s focus only on “AAI chips”, which form the brain of an AAI system, replace CPUs and GPUs, and are where most progress has yet to be achieved.
While GPUs are typically better than CPUs when it comes to AI processing, they often fall short, since they are specialized for computer graphics and image processing, not for neural networks.
The AAI industry needs specialized processors to enable efficient processing of AAI applications, modelling and inference. As a result, chip designers are now working to create specialized processing units.
These come under many names (NPU, TPU, DPU, SPU, etc.), but a catch-all term is the AAI processing unit (AAI PU), which forms the brain of an AAI system on a chip (SoC).
Alongside the AAI PU, an AAI SoC also includes:
1. the neural processing unit, or matrix multiplication engine, where the core operations of an AAI SoC are carried out;
2. controller processors, based on RISC-V, ARM, or custom-logic instruction set architectures (ISAs), to control and communicate with all the other blocks and the external processor;
3. SRAM;
4. I/O;
5. the interconnect fabric between the processors (AAI PU, controllers) and all the other modules on the SoC.
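Purely as an illustration, the composition of such an SoC can be sketched as a data structure; the block names mirror the list above, and every field value is a hypothetical placeholder rather than a real chip specification:

```python
# Hypothetical sketch of the blocks on an AAI SoC, mirroring the list
# above; the field values are descriptive placeholders, not a chip spec.
from dataclasses import dataclass, field

@dataclass
class AAISoC:
    npu: str = "matrix-multiplication engine"        # core AAI PU operations
    controllers: list[str] = field(
        default_factory=lambda: ["RISC-V", "ARM"])   # control/communication ISAs
    sram_mb: int = 32                                # on-chip working memory
    io: list[str] = field(
        default_factory=lambda: ["PCIe", "camera"])  # external interfaces
    interconnect: str = "on-chip fabric"             # links all blocks together

print(AAISoC())
```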
The AAI PU was created to execute ML algorithms, typically by operating on predictive models such as artificial neural networks. Its workloads are usually classified as either training or inference, two phases that are generally performed independently.
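A minimal sketch of that split, using an arbitrary toy model in PyTorch (the library choice, model, and sizes are assumptions for illustration, not the chips’ actual software stacks), shows training and inference as two independent phases:

```python
# Training vs. inference as two independent phases of the same model.
import torch
import torch.nn as nn

model = nn.Linear(3, 1)                     # toy predictive model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 3), torch.randn(64, 1)

# Training: iteratively adjust the weights (heavy; usually in the cloud).
for _ in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: run the frozen model on new data (light; often at the edge).
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 3)))
```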
AAI PUs are generally required for the following:
- Accelerating the computation of ML tasks many-fold (reportedly up to ~10,000 times) compared to GPUs
- Consuming less power and improving resource utilization for ML tasks compared to GPUs and CPUs (see the timing sketch after this list)
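As a rough illustration of what such comparisons measure (a GPU stands in for a dedicated accelerator here, and the actual speedup depends entirely on the hardware), one can time the dominant ML operation, matrix multiplication, on different devices:

```python
# Hedged illustration: timing the core ML operation (matrix multiply)
# on CPU vs. an accelerator. Sizes and repeat counts are arbitrary.
import time
import torch

def time_matmul(device: str, n: int = 2048, repeats: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b                          # warm-up so setup costs don't distort timing
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()       # wait for asynchronous GPU work to finish
    return (time.perf_counter() - start) / repeats

cpu_t = time_matmul("cpu")
print(f"CPU: {cpu_t * 1e3:.1f} ms per matmul")
if torch.cuda.is_available():
    gpu_t = time_matmul("cuda")
    print(f"GPU: {gpu_t * 1e3:.1f} ms per matmul ({cpu_t / gpu_t:.0f}x faster)")
```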
Unlike CPU and GPU design, the design of single-purpose AAI SoCs is far from mature.
Specialized AI chips deal with specialized ANNs and are designed to do two things with them: task-specific training and inference, only for tasks such as facial recognition, gesture recognition, natural language processing, image search, and spam filtering.
In all, there are {Cloud, Edge} × {Training, Inference} chips for AAI models of specific tasks. Examples of (Cloud + Training) chips include NVIDIA’s DGX-2 system, which totals 2 petaFLOPS of processing power from 16 NVIDIA V100 Tensor Core GPUs, and Intel Habana’s Gaudi chip; such chips were used to train the models behind Facebook’s photos and Google Translate.
(Cloud + Inference) chips, in contrast, serve those trained models: they process the data you input using the models these companies created, and they run AAI chatbots and most other AAI-powered services offered by large technology companies. Sample chips here include Qualcomm’s Cloud AI 100, a large chip used for AAI in massive cloud datacentres, as well as Alibaba’s Hanguang 800 and Graphcore’s Colossus MK2 GC200 IPU.
Examples of (Edge + Inference) on-device chips include Kneron’s own chips, the KL520 and the recently launched KL720, which are low-power, cost-efficient chips designed for on-device use, as well as Intel’s Movidius and Google’s Coral TPU.
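A minimal sketch of such on-device inference, assuming the tflite_runtime package and a pre-trained quantized model file (the file name "model.tflite" is a placeholder), looks like this:

```python
# On-device (Edge + Inference) sketch with TensorFlow Lite; on a Coral
# Edge TPU the same Interpreter would be built with an Edge TPU delegate.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder file name
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the shape/dtype the model expects; a real application
# would pass a preprocessed camera frame or audio buffer instead.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()  # inference only; no training happens on-device
print(interpreter.get_tensor(output_details[0]["index"]))
```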
All of these different types of chips, training or inference, and their different implementations, models, and use cases are expected to shape the AAI of Things (AAIoT) future.
How to Make a True Artificial Intelligence Platform
In order to create platform-neutral software, operating on the world’s data/information/content, which could run and display properly on any type of computer, cell phone, device or technology platform, the following are required:
- Operating Systems.
- Computing/Hardware/Cloud Platforms.
- Database Platforms.
- Storage Platforms.
- Application Platforms.
- Mobile Platforms.
- Web Platforms.
- Content Management Systems.
The AI programming language should act as both a general programming language and a computing platform. Its applications could be launched on any operating system and hardware, from operating systems such as Linux or Android to hardware platforms ranging from game consoles to supercomputers and quantum machines.