It’s been a few months since OpenAI announced GPT-4 Turbo with Vision, a model capable of understanding images and answering questions based on visual input. Recently, I decided to leverage this in a real app and gained valuable insights along the way. This post is a quick summary of my learnings from that experience.
We’ll explore how to use the model through ChatGPT and the OpenAI API so that you can integrate it into any application. I’ll wrap up with my observations from using this model in practice.
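To give a feel for the API side before we dive in, here is a minimal sketch of a vision request using the official `openai` Node SDK. The model name, image URL, and prompt below are placeholders I picked for illustration; swap in whatever vision-capable model is current.

```typescript
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in your environment.
const openai = new OpenAI();

// Send an image URL plus a question to a vision-capable model.
const response = await openai.chat.completions.create({
  model: "gpt-4-turbo", // placeholder: any vision-capable model
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What is in this image?" },
        {
          type: "image_url",
          // Placeholder URL for illustration.
          image_url: { url: "https://example.com/photo.jpg" },
        },
      ],
    },
  ],
});

console.log(response.choices[0].message.content);
```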
Of late, I have been diving into Next.js and absolutely loving it. I wanted to put my learnings into practice, so I started building an app to solve a personal pain point. Along the way, I wanted to add authentication to my app and decided to use NextAuth, the go-to auth library for Next.js. Overall, it was a great experience - a few bumps along the way, but in the end, it all worked out well. In this post, I’ll be sharing those experiences and learnings.
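For context, a minimal NextAuth setup is only a few lines. The sketch below assumes the App Router and a GitHub OAuth provider, which may well differ from your setup; it’s just to show the shape of things.

```typescript
// app/api/auth/[...nextauth]/route.ts
// A minimal sketch, assuming a GitHub OAuth app is configured and
// GITHUB_ID / GITHUB_SECRET are set in the environment.
import NextAuth from "next-auth";
import GitHubProvider from "next-auth/providers/github";

const handler = NextAuth({
  providers: [
    GitHubProvider({
      clientId: process.env.GITHUB_ID!,
      clientSecret: process.env.GITHUB_SECRET!,
    }),
  ],
});

// NextAuth exposes a single handler for both HTTP methods.
export { handler as GET, handler as POST };
```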
Today, Meta AI announced they are releasing a new model, Code Llama 70B, a higher-performing LLM for generating code. This was exciting news, and I had to try it out immediately.
In this post, I will walk through how to download and use the new model, and how it compares to other code-generating models like GPT-4.
As usual, the easiest way to run inference on a model locally is with Ollama. So let’s set up Ollama first.
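To show where this ends up, here’s a rough sketch of calling the model once Ollama is installed and the weights are pulled. The install commands in the comments and the `ollama` npm client usage are my additions for illustration, and the prompt is made up.

```typescript
// Prerequisites (run in a terminal first):
//   curl -fsSL https://ollama.com/install.sh | sh   # install Ollama (Linux; macOS has an app)
//   ollama pull codellama:70b                       # pull the model (large download)
import ollama from "ollama";

// Ask the locally running model a coding question.
const response = await ollama.chat({
  model: "codellama:70b",
  messages: [
    {
      role: "user",
      content: "Write a Python function that checks if a string is a palindrome.",
    },
  ],
});

console.log(response.message.content);
```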
It’s been a while since I last shared an update on my Reimagine journey, which started three months ago. Initially, I shared my progress every week, and that helped me build momentum during the early stages. But after some time, this weekly ritual became tedious - sometimes taking up an entire day for a single post. So I decided to share updates based on significant milestones rather than tying them to specific points in time.
Today is a beautiful day - just two more days left in the beautiful year of 2023, a pivotal year in my career and life.
Reflecting on the year, I am grateful and happy that I was able to experience it in good health, surrounded by wonderful people - my family, friends, and my team/colleagues at Intuit. Now is the perfect time to say goodbye to the year and share the invaluable lessons I have learned along the way.
Welcome to the Week #5 review of my reimagine journey. The previous weeks’ reviews and the full series can be found in the new-beginning tag collection.
What I was able to do this week
Yes, it is week #4 of my reimagine journey and it’s time for the review. I couldn’t post it last week for reasons I will explain below. The previous weeks’ reviews and the full series can be found in the new-beginning tag collection.
What I was able to do this week
OK, here is the honest, bitter truth: this week (week #4) was super rough for me, in terms of both learning and reaching the goals I had set at the beginning of the week.
This is the fast follow-up (Part 2) to the previous post, OpenAI DevDay 2023 - Highlights (aka Part 1), where I shared the highlights and announcements from OpenAI DevDay 2023.
In this post, I will share my observations about the event and OpenAI’s tech, along with my learnings from talking to people during the event and from trying the tech hands-on. So, let’s jump in!
This week I attended OpenAI DevDay on 6 Nov 2023 at SVN West, San Francisco. The event was special in many ways: for me, it was the first conference I had attended since the pandemic and the first one during my new journey; for OpenAI, it was their first developer conference ever! So I think it deserves a dedicated blog post to share the highlights, key takeaways, and my observations, right? Right!
This post has been sitting in my drafts for two weeks, waiting for me to find the time to polish and publish it. But, as usual, I never got to it and it just stayed there. Anyway, now I am just going to publish it. So if you see typos or mistakes, please let me know. Progress, not perfection, yeah?
It’s time to review Week #3 of my reimagine journey. The previous weeks’ reviews and the full series can be found in the new-beginning tag collection.