AI Weekly: Qualcomm’s AI research and development efforts


This week marked the beginning of the International Conference on Learning Representations (ICLR) 2021, an event devoted to research in deep learning, a subfield of AI inspired by the structure of the brain. One of the world’s largest machine learning conferences, it accepted 860 research papers from thousands of participants this year, up from 687 papers in 2020.

One of the participating researchers is Jilei Hou, VP of engineering at Qualcomm. He heads up Qualcomm’s AI Research division, which focuses on advancing AI to bring its core capabilities, including perception, reasoning, and action, to Qualcomm’s portfolio of hardware products. Together with his colleagues at the company, Hou presented new papers at ICLR in the areas of power and energy efficiency, computer vision, natural language processing, and machine learning fundamentals.

Qualcomm’s research, while in some cases preliminary, is impactful by nature of the company’s market footprint. In the second quarter of 2020, Qualcomm accounted for 32% of global smartphone application processor revenue, according to Statista. And as of January 2017, the company had shipped more than a billion chips for the internet of things alone.

Improved efficiency

An important research direction for Qualcomm is representation learning, which could allow AI systems to learn with high data efficiency as well as generalizability. At ICLR, Hou detailed the company’s work in unsupervised learning, where an algorithm is subjected to “unknown” data for which previously defined categories or labels don’t exist. The machine learning system must teach itself to classify the data, processing the unlabeled data to learn from its inherent structure.
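The unsupervised setup described here can be illustrated with a toy clustering sketch: given unlabeled points, the algorithm discovers groups purely from the data’s structure. This is a minimal k-means illustration of the general idea, not Qualcomm’s method; the data and initialization below are invented for the example.

```python
import numpy as np

def kmeans(X, centroids, iters=20):
    """Toy k-means: groups unlabeled points by their inherent structure."""
    centroids = centroids.astype(float).copy()
    for _ in range(iters):
        # Assign every point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs of unlabeled 2D points (no class labels given)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (50, 2)),
               rng.normal(3, 0.2, (50, 2))])
# Seed one centroid in each region, then let the algorithm refine them
labels, centroids = kmeans(X, X[[0, -1]])
```

No labels are ever supplied; the grouping emerges entirely from distances between points, which is the essence of learning from the data’s inherent structure.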

Hou says his team achieved state-of-the-art performance with end-to-end learning for video compression, a key use case for Qualcomm’s mobile device customers. Beyond this, he and coauthors have explored “neural augmentation,” or the idea that classical algorithms and neural network architectures can be combined to incorporate scientific knowledge.

Neural augmentation is essentially the marriage of neural networks and symbolic AI, which involves embedding facts and behavioral rules into models. As opposed to neural networks, which map data inputs to outputs, symbolic AI can encode knowledge or programs. The neural networks help identify subtle patterns that may be too complex to model explicitly.

Hou believes that neural augmentation could lead to compact neural network model sizes and superior efficiency during training. His team has already seen success within the areas of wireless, multimedia, and systems design.

More recently, Hou and colleagues investigated using machine learning as a design methodology for combinatorial optimization problems like vehicle traffic routing and chip design placement. They claim to have trained specialized models with unlabeled data and reinforcement learning, which deals with learning via interaction and continuous feedback. “We believe that the intersection of machine learning and combinatorial optimization will produce profound interest in the machine learning research community, as well as toward industrial impact,” Hou told VentureBeat.

Computer vision and data privacy

In the computer vision domain, several of Hou’s projects target segmentation. Object segmentation is used in tasks ranging from swapping out the background of a video chat to teaching robots to navigate through a factory. But it’s considered among the hardest challenges in computer vision because it requires an AI to understand what’s in an image.

A Qualcomm-authored paper details improvements in the accuracy of segmentation, and another describes the fastest video segmentation to date on Qualcomm’s Snapdragon chipsets. Hou and colleagues also created a model that improves the consistency of segmentation while allowing fine-tuning on a mobile device.

One of the ways Hou aims to achieve performance gains is through neural architecture search (NAS) techniques. NAS teases out top model architectures for tasks by testing candidate models’ overall performance, dispensing with manual fine-tuning. In a complementary effort, Hou says Qualcomm is investing in personalization and federated learning technologies that allow neural network models to continuously learn on-device while keeping data with users, in the interests of privacy.
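The NAS loop described here, scoring candidate architectures automatically instead of hand-tuning, can be sketched with a toy random search. This is a stand-in illustration under invented assumptions: the search space, the `evaluate` proxy, and its accuracy/cost trade-off are all hypothetical, not Qualcomm’s actual method.

```python
import itertools
import random

# Hypothetical search space: (depth, width) pairs stand in for architectures
SEARCH_SPACE = list(itertools.product([2, 4, 8], [16, 32, 64]))

def evaluate(depth, width):
    """Stand-in for training and validating one candidate network.
    Real NAS would measure validation accuracy (and, for mobile targets,
    latency or energy); here a fake score rewards capacity but penalizes size."""
    capacity = depth * width
    accuracy = 1.0 - 1.0 / capacity  # invented proxy for task accuracy
    cost = 0.001 * capacity          # invented proxy for on-device cost
    return accuracy - cost

def random_search(n_trials=5, seed=0):
    """Sample candidates and keep the best scorer -- no manual tuning."""
    rng = random.Random(seed)
    candidates = rng.sample(SEARCH_SPACE, n_trials)
    return max(candidates, key=lambda c: evaluate(*c))

best = random_search()
```

Production NAS systems replace random sampling with smarter strategies (evolutionary search, gradient-based relaxation, or reinforcement learning), but the structure is the same: propose candidates, score them automatically, keep the winner.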

“The mission of Qualcomm AI Research is to make AI ubiquitous,” Hou said. “Qualcomm AI Research is taking a holistic approach to model efficiency research via research efforts in quantization, compression, NAS, and compilation … By creating these projects and making it easy for developers to use them, we are empowering the ecosystem to run complex AI workloads efficiently. [They’re] already helping the wider AI ecosystem and having real-world impact on a variety of industry verticals.”

For AI coverage, send news tips to Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer



