The Alif Blog

The rights and wrongs of scaling generative AI software for the endpoint

The applications for generative AI that have captured the popular imagination, such as the ChatGPT and Gemini AI assistants or the Adobe Firefly image creator, operate with models implemented in gargantuan code bases and trained on data from the entire internet. Only cloud data centers have sufficient compute resources to run these applications and provide inference results with acceptable latency.

Comparing MCUs for generative AI: it’s not just about the GOPS!

In generative AI, raw throughput is a poor indicator of actual system performance. That is because successful generative AI applications depend on support for transformer operators and on shifting large amounts of data inside the system: between memory, the NPU, the CPU, and peripheral functions such as an image signal processor.

How different should you want your AI MCU to be?

We are in the early days of implementing AI at the edge, and the market for edge AI processors has not yet settled into a stable, mature state. In an industry as dynamic and innovative as the semiconductor business, that means that competition is intense, and many companies with a wide variety of product concepts are emerging.

A vision of the future: MCU-based edge AI devices to take center stage in consumers’ digital lives

AI offers the biggest opportunity for many endpoint or edge IoT device manufacturers to add value to their products and achieve meaningful differentiation. But because of edge AI devices' limited resources – power, processor bandwidth and size – manufacturers are continually bumping up against the ceiling of what they can do with AI on a microcontroller.
