World Advertising Report

Lenovo Taps AzurEngine’s AI Accelerator in ThinkBook Laptops

SAN DIEGO, CA, UNITED STATES, September 23, 2025 /EINPresswire.com/ -- AzurEngine Inc. (AZE), a leading provider of AI accelerator chips, is pleased to announce that Lenovo is now selling its ThinkBook with the AzurEngine AzureBlade M.2 AI accelerator.

This is the industry’s first laptop with a fully integrated discrete AI Neural Processing Unit. It was made possible by the performance-packed, ultra-compact, low-power AZE AzureBlade M.2 card, enabling an ultra-thin, fanless design.

AZE’s AI chip is the only ASIC-based AI accelerator that combines GPU-like programmability with ASIC-class power, performance, and area efficiency, and it is engineered from the ground up to run native CUDA software.

“With native CUDA software support, AZE AI chips provide the only true second source to NVIDIA GPUs and enable developers to easily run software developed on NVIDIA-based cloud systems,” said Yuan Li, CEO of AZE. CUDA compatibility also enables easy migration of algorithms, models, and software to servers, laptops, and PCs.

AZE’s AI accelerators range in performance and cost depending on the market segment addressed. In this ThinkBook configuration, the AZE chip delivers 12-14 tokens per second on 8B LLMs such as Llama 3 and Qwen 2.0, all while consuming just 8 W of power.
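As a rough illustration of what the quoted figures imply, the throughput and power numbers above work out to well under one joule per generated token. This is back-of-envelope arithmetic on the press release's own numbers, not an independent measurement:

```python
# Back-of-envelope energy-per-token estimate from the quoted figures:
# 12-14 tokens/s at 8 W of power draw (the press release's numbers).

def joules_per_token(watts: float, tokens_per_second: float) -> float:
    """Energy cost of generating one token, in joules (W / (tokens/s))."""
    return watts / tokens_per_second

best = joules_per_token(8.0, 14.0)   # fastest quoted rate
worst = joules_per_token(8.0, 12.0)  # slowest quoted rate
print(f"{best:.2f}-{worst:.2f} J per token")  # roughly 0.57-0.67 J/token
```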

AZE’s AI accelerators redefine what is possible for full, high-performance LLM inference processing locally on laptops, PCs, and servers, without having to rely on the cloud. They enable improved latency, reduced cloud bottlenecks, more power-efficient AI processing, and better data security and control.

About AzurEngine
AzurEngine Inc. (AZE), headquartered in San Diego, California, is a leading provider of high-performance AI Accelerator chips.
AZE’s AI chip is the only ASIC-based AI accelerator that combines GPU-like programmability with ASIC-class power, performance, and area efficiency, and it is engineered from the ground up to run native CUDA software. This makes the AZE AI chip the industry’s only true second source to NVIDIA GPUs, with the benefits of lower power consumption, a smaller footprint, and superior performance at the same process node.
With full CUDA compatibility, software developed on NVIDIA-based cloud systems can be deployed seamlessly on AZE hardware, maintaining performance while reducing power and space requirements.
AZE’s scalable AI accelerator architecture and native CUDA support enable effortless migration of algorithms, models, and applications from the cloud to servers, laptops, and PCs, unlocking high-efficiency AI processing across the entire compute spectrum.
Visit our website at http://www.azurengine.com or contact us at info@azurengine.com.

Ryan Braidwood
AzurEngine
marketing@azurengine.com
Visit us on social media:
LinkedIn

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
