Artificial Intelligence has become an integral part of technological progress, with applications ranging from self-driving cars to, more worryingly, autonomous weapons. Unknown to many, Artificial Intelligence is not just robots with human-like characteristics; it encompasses an entire world of subtle nuances that are hard for a layperson to comprehend.
John McCarthy, widely regarded as the father of Artificial Intelligence, described it as "The science and engineering of making intelligent machines, especially intelligent computer programs. Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think."
In recent years, companies have been integrating artificial intelligence (AI) into their businesses to expand their customer base and improve customer satisfaction. The degree to which a company relies on AI is increasingly seen as a factor that distinguishes its products from those of its competitors.
Those with a robust AI infrastructure can track consumer behavior on their online platforms, feed that data into their databases, and then showcase products tailor-made for each customer. This is how companies use AI to learn about consumers, improve their products or services, deliver satisfaction, and make their business stand out from the rest.
However, the extensive use of AI has also led to misuse, and it is high time a framework was put in place to keep this functionality in check.
AnTuTu’s AI Review app
AnTuTu offers one such platform for users to compare AI performance across devices. Its benchmark app, called “AI Review,” was released in cooperation with chip manufacturers and aims to measure the AI performance of smartphones.
The smartphone industry is rife with stark differences in AI standards, which makes the task of measuring AI difficult. Every chip manufacturer has its own framework for implementing AI. For instance, Samsung and MediaTek manage their AI operations through dedicated chips, referred to as the NPU and APU respectively, while Qualcomm handles AI operations through its Hexagon DSP.
Huawei’s HiSilicon does it through an independent NPU. The picture is further complicated by the interaction of hardware and software, since their synergy is instrumental to AI performance. Each vendor also provides its own SDK for AI: Qualcomm has SNPE, MediaTek has NeuroPilot, HiSilicon has HiAI, and so on.
The Object Recognition test runs a 600-frame video through the MobileNet SSD neural network, which executes on the SDK provided by the chip vendor.
When a chip cannot run the AI-related algorithms through a vendor SDK, the benchmark app falls back to TFLite (TensorFlow Lite). AnTuTu itself cautions that results obtained this way are unsatisfactory and unreliable.
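The backend-selection logic described above can be sketched as follows. This is a hypothetical illustration, not AnTuTu's actual implementation: the function names and the `sdk_available` parameter are assumptions, and only the vendor-to-SDK mapping comes from the article.

```python
# Illustrative mapping of chip vendors to their AI SDKs (from the article).
VENDOR_SDKS = {
    "Qualcomm": "SNPE",
    "MediaTek": "NeuroPilot",
    "HiSilicon": "HiAI",
}

def select_backend(vendor, sdk_available):
    """Return (backend, reliable) for a given chip vendor.

    `sdk_available` is the set of vendors whose SDK can actually run the
    benchmark's neural networks on this device. Hypothetical helper, not
    part of any real SDK.
    """
    if vendor in VENDOR_SDKS and vendor in sdk_available:
        return VENDOR_SDKS[vendor], True
    # Fallback: run the network on TFLite. AnTuTu warns that scores
    # obtained this way are less reliable.
    return "TFLite", False
```

For example, a Qualcomm device with a working SNPE runtime would be benchmarked on SNPE, while a Samsung device, whose AI SDK has not been released, would drop to the TFLite fallback and be flagged as less reliable.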
The benchmark score measures both speed and accuracy. When accuracy is sacrificed for speed, AnTuTu penalizes the score. This discourages cheating the benchmark by producing quick results that lack authenticity or are simply wrong.
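One way such a penalty could work is sketched below. AnTuTu's actual formula is not public; this hedged example only illustrates the principle that a score rewards throughput but is scaled down when accuracy falls below a reference level, so that "fast but wrong" cannot win. The function name, the `reference_accuracy` value, and the linear penalty are all assumptions.

```python
def benchmark_score(fps, accuracy, reference_accuracy=0.75):
    """Score inference throughput (frames/sec), penalized for accuracy loss.

    `reference_accuracy` is an assumed target, not AnTuTu's real threshold.
    """
    if accuracy >= reference_accuracy:
        return fps  # full credit when accuracy meets the reference
    # Scale the score down in proportion to the accuracy shortfall,
    # so trading accuracy for raw speed yields no net gain.
    penalty = accuracy / reference_accuracy
    return fps * penalty
```

Under this scheme, a device running twice as fast at half the reference accuracy would score no better than an honest one, which is the behavior the benchmark is after.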
Users should read AnTuTu’s special remarks on using the app. Since the benchmark focuses mainly on AI performance, users will not see large gaps between the scores of devices that use the same AI processor.
Samsung has not yet released its AI SDK, and HiSilicon relies on TFLite for certain functions; until these situations improve, their scores will remain low. The device’s base Android version also influences the score, as Google has been working on optimizing AI support in Android.
AnTuTu’s primary goal is to measure AI performance on devices. AI-based computation involves many variables, which adds to its complexity; as a result, the interaction between different hardware and software solutions becomes even more intricate.
Therefore, the benchmark scores should be taken as an approximation rather than a true and complete picture of the intricacies involved in the world of AI.