WAIC 2024 | Dialogue with Qiu Xiaoxin of Aixin Yuanzhi: Smart chips and multimodal large models have become the “golden combination” in the AI era

At the “Core Leading the Future | Intelligent Chips and Multimodal Large Models Forum” of the World Artificial Intelligence Conference (WAIC) on July 5, AI perception and edge computing chip platform company Aixin Yuanzhi unveiled its “Aixin Tongyuan AI Processor.” The processor is built around an operator instruction set and a dataflow micro-architecture and comes in three computing-power tiers: high, medium, and low. It has already reached mass production in two scenarios, smart cities and assisted driving, and is also used in general large-model products such as text search, general detection, image generation, and AI agents.

At the Aixin Yuanzhi exhibition area, The Economic Observer saw devices equipped with the Aixin Tongyuan series edge chip AX630C running Alibaba Cloud’s Tongyi Qianwen Qwen2.0 large model and holding smooth human-computer dialogues. In another demonstration, the AX650N chip paired with Mianbi Intelligence’s MiniCPM V2.0 large model allowed edge devices to run AI models and deliver generative AI experiences such as image-to-text generation.

“Smart chips and multimodal large models have become the ‘golden combination’ of the AI era,” Qiu Xiaoxin, founder and chairman of Aixin Yuanzhi, said in his speech. He noted that as large models are applied more widely, smart chips must become more economical, efficient, and environmentally friendly, and that efficient inference chips built around an AI processor are crucial for deploying large models in devices.

Founded in 2019, Aixin Yuanzhi has independently developed and mass-produced a variety of AI chips, including smart application chips, coaxial high-definition transmission (TX) chips and receiving (RX) chips, hybrid digital-analog chips, and microcontroller unit (MCU) chips.

Unlike some chip manufacturers that focus on intelligent computing centers, Aixin Yuanzhi’s business layout prioritizes the edge and device sides. In Qiu Xiaoxin’s view, the large-scale implementation of large models requires close integration of cloud, edge, and device, and the key to integrating edge and device lies in AI computing and perception, which is precisely where Aixin Yuanzhi’s strength lies.

[Dialogue]

Economic Observer: For deploying large models on the device side, what parameter scale can Aixin Yuanzhi’s existing chips support?

Qiu Xiaoxin: Currently, our chips support 7B models (i.e., 7 billion parameters). We are also watching the model scale Apple deploys on its phones; 3B models have already shown real-world applications on Apple devices. We expect that the models running on edge and device-side hardware will mainly fall in the 3B-7B parameter range, with 7B potentially offering higher performance.

Of course, this reflects current performance. AI development is a continuous process of jointly optimizing chips and algorithms. Alongside enhancing chip capabilities to support large-model deployment, we also need to optimize the algorithms themselves, for example by making them more lightweight, to further improve performance.

Previously, a 7B model might not have been able to run on a 3.2T chip (that is, one with a computing power of 3.2 trillion operations per second). If the chip is equipped with a mixed-precision neural network processing unit (NPU), however, the compute and storage requirements drop for the same number of parameters, enabling the 7B model to run on such a chip.
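To make the arithmetic behind this point concrete, here is a minimal back-of-the-envelope sketch in Python of how weight precision changes a 7B model’s memory footprint and its bandwidth-bound decoding rate. The 7B parameter count comes from the interview; the DRAM-bandwidth figure, the assumption that decoding is memory-bandwidth-bound, and the function names are illustrative assumptions rather than Aixin Yuanzhi’s actual sizing method.

```python
# Illustrative estimate only: how lower weight precision (mixed-precision/quantized NPU
# execution) shrinks the footprint of a 7B model and raises its decoding-rate ceiling.

PARAMS = 7e9                  # 7B parameters, dense decoder-only model (assumption)
MEM_BANDWIDTH_GBPS = 30.0     # assumed DRAM bandwidth of a typical edge SoC (placeholder)

def weight_footprint_gb(params: float, bits: int) -> float:
    """Memory needed just to store the weights at the given precision."""
    return params * bits / 8 / 1e9

def decode_tokens_per_s(params: float, bits: int, bandwidth_gbps: float) -> float:
    """Rough ceiling for autoregressive decoding, assuming every weight is streamed
    from DRAM once per generated token (the memory-bandwidth-bound regime)."""
    return bandwidth_gbps / weight_footprint_gb(params, bits)

for bits in (16, 8, 4):       # FP16 vs. INT8 vs. INT4 weights
    gb = weight_footprint_gb(PARAMS, bits)
    tps = decode_tokens_per_s(PARAMS, bits, MEM_BANDWIDTH_GBPS)
    print(f"{bits:>2}-bit weights: ~{gb:4.1f} GB of weights, ~{tps:4.1f} tokens/s ceiling")
```

Under these assumptions, FP16 weights (about 14 GB) are out of reach for most edge devices, while 4-bit weights (about 3.5 GB) fit in commodity DRAM and roughly quadruple the bandwidth-limited decoding rate, which illustrates why a mixed-precision NPU can make a 7B model viable on such a chip.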

Therefore, implementing large models necessitates continuous hardware and algorithm optimization, particularly the optimization iteration from the cloud side to the edge side.

Economic Observer: Some chip manufacturers are involved in building intelligent computing centers; although training large models there is costly, the rate of return is higher. Has Aixin Yuanzhi considered building an intelligent computing center?

Qiu Xiaoxin: Our chip architecture is highly suitable for AI inference. Intelligent computing centers, however, have two primary workloads: training and inference.

Will intelligent computing centers become readily buildable facilities in the future? I think it is possible. But at this stage, I remain focused on Aixin Yuanzhi’s efforts in the vast edge- and device-side market.

Economic Observer: In addition to automotive chips, will Aixin Yuanzhi have a second growth curve?

Qiu Xiaoxin: Aixin Yuanzhi is an edge- and device-side artificial intelligence chip company. While the edge and device markets are relatively fragmented compared with cloud data centers, our core technology applies across the different segments. Vehicles are just one of the high-growth tracks; before that, we had already established ourselves in artificial intelligence Internet of Things (AIoT) fields such as smart cities.

In-vehicle computing constitutes our second growth curve, and the third curve we have begun exploring is edge computing. There is strong demand to upgrade many existing central processing unit (CPU) servers into AI servers, and integrating Aixin Yuanzhi’s plug-and-play AI acceleration card is an ideal way to do so. Product forms such as AI computing boxes and AI all-in-one machines can likewise empower various industries.

Economic Observer: Embodied intelligence gained prominence at last year’s WAIC. Does Aixin Yuanzhi have any plans in this regard?

Qiu Xiaoxin: We have always been optimistic about this direction. The technologies required for embodied intelligence include visual processing: robots need a variety of sensors and a powerful AI processor. We are also working with customers on how to use Aixin Yuanzhi’s AI chips to build embodied-intelligence products.

It’s important to acknowledge that the current embodied intelligence product form is still in its early stages. For chip companies, it’s essential to proactively explore and accumulate technical know-how within an industry. While embodied intelligence and vehicle-mounted solutions can share chips, the application scenarios or required capabilities might differ. Therefore, we must gain industry knowledge through collaborations. For example, for embodied intelligence or robot product forms, what specialized functions should the chip be designed for? Once these questions are answered and relevant products are ready for mass production, we can launch dedicated chips.

Any application scenario involving “vision + AI” is a potential target for us, as long as the market is large enough.

Economic Observer: What is the current state of large-model implementation on the device side?

Qiu Xiaoxin: It’s definitely in the early exploration phase. I believe the first implementations will be in cars, mobile phones, and AI PCs. Cars require real-time responses; smart cockpits, for instance, are a typical application scenario for AI agents, covering human-computer interaction and control of cockpit functions.
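As a purely hypothetical illustration of the kind of AI-agent interaction described here, the sketch below shows how a structured intent, assumed to be produced by an on-device language model from a spoken request, could be dispatched to cockpit control functions. The intent schema and all function names are invented for illustration and do not describe Aixin Yuanzhi’s or any vendor’s actual interface.

```python
# Hypothetical sketch of an in-cockpit AI-agent dispatch loop. The intent schema and the
# control functions are invented for illustration; in practice an on-device LLM would emit
# the structured intent from a spoken request, and handlers would call real vehicle APIs.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Intent:
    action: str                 # e.g. "set_temperature", "open_window"
    args: Dict[str, float]

def set_temperature(args: Dict[str, float]) -> str:
    return f"Cabin temperature set to {args['celsius']:.0f} °C"

def open_window(args: Dict[str, float]) -> str:
    return f"Driver window opened to {args['percent']:.0f}%"

# Registry mapping intent names to cockpit control handlers (illustrative only).
HANDLERS: Dict[str, Callable[[Dict[str, float]], str]] = {
    "set_temperature": set_temperature,
    "open_window": open_window,
}

def dispatch(intent: Intent) -> str:
    """Route a structured intent (assumed to come from the on-device LLM) to a handler."""
    handler = HANDLERS.get(intent.action)
    return handler(intent.args) if handler else "Sorry, I can't do that yet."

# Example: the LLM has turned "I'm a bit warm" into a structured intent.
print(dispatch(Intent(action="set_temperature", args={"celsius": 21.0})))
```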

Economic Observer: The autonomous driving (L2) market is currently very competitive. Will this “competition” be transmitted upward to Aixin Yuanzhi, one of the suppliers?

Qiu Xiaoxin: Market transmission is an objective reality, typically spreading upward from end customers one level at a time. As suppliers, we can focus on reducing costs and enhancing efficiency.

How to reduce costs? It’s challenging to spread R&D costs if we solely focus on automotive chips. Therefore, we should utilize platform technologies, such as our artificial intelligence image signal processor (AI-ISP) and NPU, to develop universal IP. This allows us to add vehicle-specific functions like functional safety, effectively spreading R&D costs.

Ultimately, the chip business revolves around scale. The output of automotive-grade chips alone isn’t substantial enough. The chip business needs to transform into a platform business to share R&D and supply chain costs and achieve a closed business loop.

Economic Observer: Will Aixin Yuanzhi use platform-based logic to develop more chip series?

Qiu Xiaoxin: Aixin Yuanzhi aims to cover as many different product forms as possible with a single chip. For the fragmented market, creating a separate chip for each product form would be too costly.

From a chip perspective, mobile phones have the highest shipment volume, so their scale lets chip manufacturers spread R&D costs across chips of many specifications and tiers. In the AIoT field, by contrast, most individual markets lack the shipment volume to support the R&D expenditure of a dedicated chip. The chip therefore needs to be universal: the commonalities of the market are abstracted so that one chip can serve a wider range of applications.

For Aixin Yuanzhi, visual perception and AI have become fundamental requirements across numerous product forms, and low power consumption and high energy efficiency are increasingly valued by the market. Once these commonalities are distilled and a single chip can cover enough scenarios, chip companies have room to be profitable.

