Instructions to use Tavernari/git-commit-message with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Tavernari/git-commit-message with Transformers:
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="Tavernari/git-commit-message")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Tavernari/git-commit-message", dtype="auto")
- llama-cpp-python
How to use Tavernari/git-commit-message with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Tavernari/git-commit-message",
    filename="unsloth.F16.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
        }
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Tavernari/git-commit-message with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Tavernari/git-commit-message:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Tavernari/git-commit-message:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Tavernari/git-commit-message:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Tavernari/git-commit-message:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Tavernari/git-commit-message:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Tavernari/git-commit-message:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Tavernari/git-commit-message:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Tavernari/git-commit-message:Q4_K_M
Use Docker
docker model run hf.co/Tavernari/git-commit-message:Q4_K_M
- LM Studio
- Jan
- Ollama
How to use Tavernari/git-commit-message with Ollama:
ollama run hf.co/Tavernari/git-commit-message:Q4_K_M
- Unsloth Studio
How to use Tavernari/git-commit-message with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Tavernari/git-commit-message to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Tavernari/git-commit-message to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Tavernari/git-commit-message to start chatting
- Pi
How to use Tavernari/git-commit-message with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Tavernari/git-commit-message:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Tavernari/git-commit-message:Q4_K_M" }
      ]
    }
  }
}

Run Pi

# Start Pi in your project directory:
pi
- Hermes Agent
How to use Tavernari/git-commit-message with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Tavernari/git-commit-message:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Tavernari/git-commit-message:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use Tavernari/git-commit-message with Docker Model Runner:
docker model run hf.co/Tavernari/git-commit-message:Q4_K_M
- Lemonade
How to use Tavernari/git-commit-message with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Tavernari/git-commit-message:Q4_K_M
Run and chat with the model
lemonade run user.git-commit-message-Q4_K_M
List all available models
lemonade list
Git Commit Message Generator (Reasoning Version)
Welcome to the Git Commit Message Generator—an AI-powered tool that turns your code diffs into clear, meaningful commit messages with detailed reasoning. Hosted on Hugging Face, this reasoning version not only provides a commit message but also explains the thought process behind it, making your commit history more transparent and informative.
Note: This is the reasoning version of the Git Commit Message Generator. For other versions (e.g., Chain of Draft or standard), check out my Ollama repository for more details.
Beta Status: This version is in beta. Your feedback is essential to refine it further!
Output Format
The model outputs two main components wrapped in a structured format:
- Reasoning: A detailed breakdown of the changes in your diff and the logic behind the commit message.
- Commit Message: A concise summary of the changes, followed by a more detailed description.
Example Output:
<reasoning>
The diff shows changes in `WalletManagerView.swift`. The most significant changes are the
addition of a new `@State` variable `monthlySpentGoal` and modifications to existing functions
and state variables. The `monthlySpentGoal` is now used in various places, such as in the
`onSave` closure and in the `hasChanges` function. The purpose of these changes is to add a new
feature that allows users to set a monthly spending goal for their wallets, which can be
updated and saved with other wallet information.
The changes include:
- Adding a new `@State` variable `monthlySpentGoal`.
- Updating the `onSave` closure to include `monthlySpentGoal`.
- Using `monthlySpentGoal` in the `hasChanges` function to determine if the wallet's spending
goal has changed.
- Adding a new section `WalletGoalSpentSection` that uses `monthlySpentGoal`.
- Updating the `onSetAsMain` function to use `monthlySpentGoal`.
</reasoning>
Add monthly spending goal feature to WalletManagerView
This commit adds a new `@State` variable `monthlySpentGoal` to the `WalletManagerView` and
updates the `onSave` closure and related functions to include and use this variable. This
feature allows users to set and save a monthly spending goal for their wallets, enhancing the
wallet management functionality.
This format gives you both the "why" and the "what" of your commit, enhancing clarity and context.
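When consuming the model programmatically, the two parts can be separated from the raw output. A minimal sketch, assuming the reasoning is always wrapped in `<reasoning>…</reasoning>` tags exactly as shown above (the `split_output` helper name is illustrative, not part of the model's API):

```python
import re

def split_output(raw: str) -> tuple[str, str]:
    """Split the model's raw output into (reasoning, commit_message).

    Assumes the reasoning is wrapped in <reasoning>...</reasoning> tags;
    everything after the closing tag is treated as the commit message.
    """
    match = re.search(r"<reasoning>(.*?)</reasoning>", raw, re.DOTALL)
    if match is None:
        # No reasoning block found: treat the whole output as the message.
        return "", raw.strip()
    reasoning = match.group(1).strip()
    commit_message = raw[match.end():].strip()
    return reasoning, commit_message

raw = (
    "<reasoning>\nThe diff adds a new print statement.\n</reasoning>\n"
    "Add welcome message to hello()"
)
reasoning, message = split_output(raw)
print(message)  # → Add welcome message to hello()
```

If the model ever omits the tags, the fallback above returns the full text as the commit message, so downstream tooling still gets something usable.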
Using the Model
You can interact with the model in two ways:
1. Web Interface
- Go to the Hugging Face Model Page.
- Paste your git diff into the input box.
- Click "Generate" to get the reasoning and commit message.
2. API Integration
- Use the Hugging Face Inference API to integrate the model into your workflows.
- Example in Python:
import requests

API_URL = "https://api-inference.huggingface.co/models/Tavernari/git-commit-message"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

diff = """
diff --git a/file1.py b/file1.py
index 83db48f..bf2a9a2 100644
--- a/file1.py
+++ b/file1.py
@@ -1,3 +1,4 @@
 def hello():
     print("Hello, world!")
+    print("Welcome to AI commit messages!")
"""

output = query({"inputs": diff})
print(output)
- Replace YOUR_HF_TOKEN with your Hugging Face API token. The response will include both the reasoning and the commit message.
Tips for Best Results
- Clear Diffs: Use small, focused diffs for more accurate messages.
- Proper Formatting: Ensure your diff is well-formatted for the model to interpret it correctly.
- Output Handling: When using the API, parse the response to separate reasoning and the commit message if needed.
Installing git-gen-commit (Optional)
For a command-line experience, you can install the git-gen-commit script, which generates commit messages from your git diff.
Disclaimer: The git-gen-commit script uses the Ollama API, not the Hugging Face model. Results may differ from this reasoning version. For more details, visit my Ollama repository.
Installation (macOS/Linux)
Run this command to install git-gen-commit globally:
sudo sh -c 'curl -L https://gist.githubusercontent.com/Tavernari/b88680e71c281cfcdd38f46bdb164fee/raw/git-gen-commit \
-o /usr/local/bin/git-gen-commit && chmod +x /usr/local/bin/git-gen-commit'
Usage
Once installed, run:
git gen-commit
This will analyze your current git diff and generate a commit message via the Ollama API.
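If you prefer to stay on the Hugging Face reasoning version rather than the Ollama API, the same flow can be reproduced against a locally running llama.cpp server. A hedged sketch, assuming `llama-server -hf Tavernari/git-commit-message:Q4_K_M` is listening on its default port 8080 and exposes the standard OpenAI-style chat-completions endpoint (the `build_request` helper and the exact response shape are assumptions, not documented behavior of this model):

```python
import json
import subprocess
import urllib.request

def build_request(diff: str,
                  url: str = "http://127.0.0.1:8080/v1/chat/completions"):
    """Build an OpenAI-style chat-completions request carrying a git diff."""
    payload = {
        "model": "Tavernari/git-commit-message:Q4_K_M",
        "messages": [{"role": "user", "content": diff}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Grab the staged diff and send it to the local server.
    diff = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True
    ).stdout
    with urllib.request.urlopen(build_request(diff)) as resp:
        reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])
```

As with the hosted API, the reply text should contain the `<reasoning>` block followed by the commit message; review it before committing.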
Feedback and Contributions
This is a community-driven project, and your input helps it grow!
- Feedback: Use the community tab to give us feedback.
- Support: If you’d like to fuel this passion project, consider a donation: Buy me a coffee ☕️.
Disclaimer
This tool is still evolving. Please review generated messages for accuracy before committing.
Get in Touch
I’d love to hear from you! Connect with me at:
Let’s make AI-powered development even better together!