<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI on Fabian G. Williams</title>
    <link>https://www.fabswill.com/tags/ai/</link>
    <description>Recent content in AI on Fabian G. Williams</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sun, 22 Dec 2024 00:00:00 +0000</lastBuildDate>
    
	<atom:link href="https://www.fabswill.com/tags/ai/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Mastering Llama 3.3 – A Deep Dive into Running Local LLMs</title>
      <link>https://www.fabswill.com/blog/masteringllama3dot370b/</link>
      <pubDate>Sun, 22 Dec 2024 00:00:00 +0000</pubDate>
      
      <guid>https://www.fabswill.com/blog/masteringllama3dot370b/</guid>
      <description>Over the holiday break, I decided to dive deep into Llama 3.3, running it on my MacBook Pro M3 Max (128GB RAM, 40-core GPU). What started as curiosity quickly turned into a full exploration of local AI models, Semantic Kernel, and API integrations using Microsoft Graph.
In this post, I’ll walk you through my setup, the performance differences between Llama 3.3 and other models like Llama 3.1 70B, and the practical lessons learned along the way.</description>
    </item>
    
  </channel>
</rss>