In this post, we will compare the most popular AI tools (frontier models and AI assistants) based on their capabilities and limitations, and on my personal experience of using them day-to-day.
To accommodate the multiple dimensions of each model, this comparison is presented as mind maps in three parts:
- Language models - the most popular text-based models that are used for text-to-text content generation.
- AI assistants - the chatbots that are powered by one or more of the above models.
Imagine a world where every website adapts to your specific needs in real time, securely and easily, without selling your data to third-party companies. That would be cool, right? Yes, and it is possible, thanks to browser extensions.
In this post, we will learn about browser extensions: what they are, why you should build them, and how to build them. We will conclude by looking at a few issues that come up frequently while building an extension and how to troubleshoot them.
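As a first taste of what an extension can do, here is a minimal content-script sketch in TypeScript. It is only an illustration: the fontSize preference key and its default value are made up, and it assumes the @types/chrome package plus a Manifest V3 manifest that registers the script and requests the storage permission.

```ts
// content-script.ts - runs inside every page the manifest matches.
// It reads a preference previously saved by the extension (for example from an
// options page) and adapts the current page to it.
chrome.storage.sync.get({ fontSize: 18 }, (prefs) => {
  // Apply the user's preferred base font size to the page
  document.documentElement.style.fontSize = `${prefs.fontSize}px`;
});
```

Everything here runs locally in your browser, which is exactly why extensions can personalize pages without handing your data to a third party.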
Nine months ago, I started the Reimagine Journey to shift my career from engineering leadership to hands-on technology, focusing specifically on my health, personal growth, and advanced tech skills. The extended leave gave me the time and space to reflect on what I truly love and to shape the next chapter of my career. It gave me the opportunity to decide how I want to live the rest of my life.
Last week, I watched an interview with Aravind Srinivas, the CEO of Perplexity AI (https://www.perplexity.ai). It is a three-hour conversation with Lex Fridman in which Aravind talked about the major breakthroughs in AI that brought us to LLMs, the mission of Perplexity, how the technology works, his vision for the future of search and the web in general, and some valuable advice for startup founders and young people.
A fascinating interview - highly recommended.
Perplexity AI (https://www.perplexity.ai) has been gaining attention in the world of chatbots and large language models. I had heard about it in a few forums and seen it mentioned by industry leaders like Jensen Huang and Kelsey Hightower. In fact, I had created an account and tried it out a few times earlier this year, but didn’t take it very seriously.
All that changed last week when I watched this recent interview of Perplexity CEO Aravind Srinivas by Lex Fridman.
The standout feature unveiled at this week’s Apple WWDC 2024 event was Apple Intelligence, a personal intelligence system that will be integrated into multiple platforms - iOS 18, iPadOS 18 and macOS Sequoia.
What is Apple Intelligence? Apple Intelligence comprises multiple highly capable and efficient generative models - large language models and diffusion models. These include on-device models as well as server-based foundation models.
The foundation models are trained on Apple’s open-source AXLearn framework for deep learning, which is built on top of JAX (a Python library for accelerated array computing and program transformation) and XLA (Accelerated Linear Algebra, an open-source ML compiler).
A few months ago, Next.js introduced the App Router, a new way to build React applications using the latest features like React Server Components and streaming. It was included in Next.js version 13 and is meant to eventually replace the Pages Router. I have been using the App Router for all my builder projects for a while now. In fact, I usually kick off projects with the standard create-next-app script, which scaffolds a new app from scratch.
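For context, here is roughly what a page looks like under the App Router convention. This is a hand-written sketch rather than create-next-app output, and the API URL and the shape of the posts data are placeholders.

```tsx
// app/page.tsx - components in the app/ directory are React Server Components
// by default, so they can be async and fetch data directly on the server.
export default async function Home() {
  // Placeholder endpoint; no caching, so the list is fetched on every request
  const res = await fetch("https://api.example.com/posts", { cache: "no-store" });
  const posts: { id: number; title: string }[] = await res.json();

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```

Because the component renders on the server, the data is fetched before any HTML reaches the browser - one of the main differences from the Pages Router's getServerSideProps/getStaticProps model. (The App Router also expects a root app/layout.tsx that wraps every page.)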
Of late, I have been diving into Next.js and absolutely loving it. I wanted to put my learning into practice, so I started building an app to solve a personal pain point. Along the way, I wanted to add authentication to my app and decided to use NextAuth, the go-to auth library for Next.js. Overall, it was a great experience, with a few bumps along the way, but in the end it all worked out well.
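To give a flavor of the setup, here is a minimal sketch of a NextAuth route handler for the App Router. It assumes NextAuth v4 and a GitHub OAuth app; the provider choice and the environment variable names are illustrative.

```ts
// app/api/auth/[...nextauth]/route.ts - catch-all route handler for NextAuth
import NextAuth from "next-auth";
import GitHubProvider from "next-auth/providers/github";

const handler = NextAuth({
  providers: [
    GitHubProvider({
      // Assumed to be set in .env.local
      clientId: process.env.GITHUB_ID!,
      clientSecret: process.env.GITHUB_SECRET!,
    }),
  ],
});

// App Router route handlers export one function per HTTP method
export { handler as GET, handler as POST };
```

With this file in place, NextAuth serves the built-in sign-in and sign-out routes under /api/auth for you.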
Today, Meta AI announced the release of Code Llama 70B, a new, higher-performing LLM for generating code. This was exciting news, and I had to try it out immediately.
In this post, I will walk through how to download and use the new model, and how it compares to other code-generating models like GPT-4.
As usual, the easiest way to run inference on a model locally is to use Ollama.
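For example, once Ollama is running and the model has been pulled, you can query it over Ollama's local REST API. The sketch below assumes the default port (11434) and the codellama:70b model tag; adjust both to match your setup. It runs on Node 18+, which ships a global fetch.

```ts
// ask-codellama.ts - send a single prompt to a locally running Ollama server
async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false returns the whole completion as one JSON payload
    body: JSON.stringify({ model: "codellama:70b", prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

generate("Write a Python function that reverses a linked list.")
  .then((code) => console.log(code))
  .catch((err) => console.error("Is the Ollama server running?", err));
```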
It’s been a while since I last shared an update on my Reimagine journey, which started three months ago. Initially, I shared my progress every week, and it helped me build momentum during the early stages. But after some time, this weekly ritual became tedious, sometimes taking up an entire day for a single post. So I decided to share updates based on significant milestones rather than tying them to specific points in time.