<?xml version="1.0" encoding="utf-8"?><?xml-stylesheet type="text/xsl" href="rss.xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Syntactic Blog</title>
        <link>https://syntacticdigital.tech/public/blog</link>
        <description>Syntactic Blog</description>
        <lastBuildDate>Thu, 04 Dec 2025 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <item>
            <title><![CDATA[Converting a Gemini LoRA]]></title>
            <link>https://syntacticdigital.tech/public/blog/converting-gemini</link>
            <guid>https://syntacticdigital.tech/public/blog/converting-gemini</guid>
            <pubDate>Thu, 04 Dec 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Introduction]]></description>
            <content:encoded><![CDATA[<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="introduction">Introduction<a href="https://syntacticdigital.tech/public/blog/converting-gemini#introduction" class="hash-link" aria-label="Direct link to Introduction" title="Direct link to Introduction" translate="no">​</a></h2>
<p>In this tutorial we will convert an existing Gemma LLM LoRA file from Hugging Face into a LoRA file compatible with Google Gemini, for use with the Syntactic LLM Unreal plugin.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="prerequistes">Prerequisites<a href="https://syntacticdigital.tech/public/blog/converting-gemini#prerequistes" class="hash-link" aria-label="Direct link to Prerequisites" title="Direct link to Prerequisites" translate="no">​</a></h2>
<p>Converting an existing LoRA model to TensorFlow Lite format requires a Python development environment; in this example we will run a Jupyter notebook on Google Colab. You will also need a Hugging Face account and be willing to connect the Colab notebook to your Google Drive.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="training-a-lora">Training a LoRA<a href="https://syntacticdigital.tech/public/blog/converting-gemini#training-a-lora" class="hash-link" aria-label="Direct link to Training a LoRA" title="Direct link to Training a LoRA" translate="no">​</a></h2>
<p>To train the LoRA we are going to use a <a href="https://colab.research.google.com/drive/15OjHRRjK8L89nS4HBqj_pRCs77pvhpeq?usp=sharing">modified version</a> of the Colab notebook that can be found <a href="https://colab.research.google.com/drive/1BiKgZtLJ1H4E6OEPZui2tooddi8K4NW0#scrollTo=5yYF61P0xseL">here</a>. The modified version is stripped down to just the code needed to convert a Google Gemma LoRA. Each block of code contains detailed instructions on how to customise the script to create an arbitrary LoRA.</p>
<p>You can compare the modified notebook against the original to see the changes needed to train a LoRA in the “Falcon 1B”, “StableLM 3B” and “Phi 2” formats. The Jupyter notebook contains comments that explain what to modify in each section.</p>
<p>The Colab notebook has three sections, which do the following:</p>
<ul>
<li class="">Imports the required dependencies.</li>
<li class="">Downloads the required files for the Gemini model and the LoRA we want to apply, and builds a simple UI with fields for your Hugging Face credentials and a button to start the conversion to TensorFlow Lite format.</li>
<li class="">Saves the converted files to your Google Drive.</li>
</ul>
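<p>The three-section flow above can be sketched in Python. This is a hypothetical outline only: the function names, parameters and output paths are illustrative and are not taken from the actual notebook, and the heavy download and conversion steps are shown as comments rather than real calls.</p>

```python
# Hypothetical outline of the three notebook sections; all names here are
# illustrative, not copied from the actual Colab notebook.
import os


def drive_output_path(drive_root: str, model_name: str) -> str:
    """Section 3: build the Google Drive path where the converted
    TensorFlow Lite file would be saved."""
    return os.path.join(drive_root, "converted", model_name + ".tflite")


def run_conversion(hf_token: str, base_model: str, lora_repo: str,
                   drive_root: str) -> str:
    """Section 2: download the model and LoRA files, then convert them to
    TensorFlow Lite. The download and conversion steps are sketched as
    comments because they are notebook-specific."""
    # Section 1 installs the dependencies, e.g.:
    # from huggingface_hub import login, snapshot_download
    # login(token=hf_token)                      # token entered via the UI
    # base_dir = snapshot_download(base_model)   # base model files
    # lora_dir = snapshot_download(lora_repo)    # LoRA adapter files
    # ... TensorFlow Lite conversion happens here ...
    out_path = drive_output_path(drive_root, base_model.split("/")[-1])
    # ... the converted file is written to out_path (Section 3) ...
    return out_path
```

<p>In the notebook itself the Hugging Face credentials come from the UI described above, and the converted file is written directly to the mounted Drive folder.</p>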
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="using-the-lora-with-unreal">Using the LoRa with Unreal<a href="https://syntacticdigital.tech/public/blog/converting-gemini#using-the-lora-with-unreal" class="hash-link" aria-label="Direct link to Using the LoRa with Unreal" title="Direct link to Using the LoRa with Unreal" translate="no">​</a></h2>
<p>To use the converted LoRA files with Unreal:</p>
<ul>
<li class="">Download the LLM and LoRA file from your Google Drive.</li>
<li class="">If the LLM + LoRA file size is over 2 GB:
<ul>
<li class="">Place the LLM on a publicly accessible server.</li>
<li class="">Set the “LLM download location” to the web address of the file location on the server.</li>
</ul>
</li>
<li class="">If the LLM + LoRA file size is less than 2 GB:
<ul>
<li class="">Select “Package” as the delivery method in the project settings.</li>
<li class="">Press the “LLM model” field in the project settings and select the LLM file.</li>
<li class="">Change the LoRA dropdown to “Enabled”.</li>
<li class="">Press the “LoRA model” field in the project settings and select the LoRA file.</li>
<li class="">Further information, including troubleshooting and tips, can be found <a href="https://syntacticdigital.tech/public/llm-for-android/index.html">here</a>.</li>
</ul>
</li>
</ul>]]></content:encoded>
            <category>LLM</category>
            <category>Customisation</category>
        </item>
    </channel>
</rss>