Hands-On: Mobile AI with Gemma - iOS, Android

In our previous post, Mobile On-device AI: Smarter Faster Private Apps, we explored the fundamentals of running AI locally on mobile devices. Now, it’s time to get hands-on and see this technology in action!

This practical guide walks you through implementing mobile on-device AI using Google’s powerful Gemma model family, including the cutting-edge Gemma 3n. You’ll learn to deploy these models across iOS, Android, and web platforms using industry-standard frameworks.

What You’ll Learn Here

  • Test various Gemma models, including Gemma 3n, in Google AI Studio.
  • Run sample on-device AI applications on Android using tools like the Google AI Edge Gallery App and MediaPipe (a minimal sketch follows this list).
  • Implement on-device Large Language Model (LLM) inference on iOS using MediaPipe.
  • Explore how to run LLMs in mobile web browsers with JavaScript and MediaPipe.
  • Gain practical experience deploying and interacting with Gemma models across different mobile platforms.
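
To give a flavor of what this looks like in practice, here is a minimal Kotlin sketch of the Android flow, assuming the MediaPipe LLM Inference API from the `tasks-genai` artifact. The model file name and path are placeholders for illustration; the detailed setup for each platform comes later in the guide.

```kotlin
// Minimal sketch: on-device Gemma inference on Android with the
// MediaPipe LLM Inference API (com.google.mediapipe:tasks-genai).
// The model path is a placeholder: push a Gemma model file to the
// device first (e.g. via adb) and point setModelPath at it.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun runGemmaPrompt(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma.task") // placeholder path
        .setMaxTokens(512)                              // cap on input + output tokens
        .build()

    // Loads the model and prepares the on-device inference engine.
    val llm = LlmInference.createFromOptions(context, options)

    // Synchronous call; generateResponseAsync streams partial results instead.
    return llm.generateResponse(prompt)
}
```

The synchronous `generateResponse` call keeps the sketch short; in a real app you would run it off the main thread or use the streaming variant so the UI can render tokens as they arrive.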

Prerequisites: Basic mobile development knowledge is helpful but not required.

Mobile On-device AI: Smarter Faster Private Apps

While cloud computing drives many AI breakthroughs, a parallel revolution is happening right in our hands - running LLMs locally on mobile devices. This emerging field, known as Mobile On-device AI, enables us to build app experiences that are more private, faster, and smarter - especially as mobile devices become increasingly powerful. As a developer passionate about AI and mobile, I am fascinated by the convergence of these two worlds and the possibilities it brings.