the small distributed language model toolkit
⚡️ fine-tune state-of-the-art LLMs anywhere, rapidly ⚡️
```bash
pip install llm-magnet
```

or

```bash
python3 setup.py install
```
check out the example notebooks
a snippet to get you started
```python
from magnet.base import Magnet, EmbeddedMagnet

# spin up an embedded cluster and create a magnet from it
cluster = EmbeddedMagnet()
cluster.start()
magnet = cluster.create_magnet()
await magnet.align()

config = {
    "host": "127.0.0.1",
    "credentials": None,
    "domain": None,
    "name": "my_stream",
    "category": "my_category",
    "kv_name": "my_kv",
    "session": "my_session",
    "os_name": "my_object_store",
    "index": {
        "milvus_uri": "127.0.0.1",
        "milvus_port": 19530,
        "milvus_user": "test",
        "milvus_password": "test",
        "dimension": 1024,
        "model": "BAAI/bge-large-en-v1.5",
        "name": "test",
        "options": {
            "metric_type": "COSINE",
            "index_type": "HNSW",
            "params": {
                "efConstruction": 40,
                "M": 48
            }
        }
    }
}

magnet = Magnet(config)
await magnet.align()
```
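The `options` block in the config maps onto a Milvus HNSW index configuration. As a rough sketch of what the two tuning knobs mean (the helper below is hypothetical, not part of magnet), `M` controls graph connectivity and `efConstruction` controls build-time search breadth:

```python
# hypothetical helper: build the `options` dict used in the config above
def hnsw_options(metric_type="COSINE", M=48, ef_construction=40):
    # M: max links per node in the HNSW graph; higher = better recall, more memory
    # efConstruction: candidate-list size while building; higher = better index, slower build
    return {
        "metric_type": metric_type,
        "index_type": "HNSW",
        "params": {"efConstruction": ef_construction, "M": M},
    }

options = hnsw_options()
```

Raising `M` and `efConstruction` trades index build time and memory for retrieval quality.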
- ⚡️ It's Fast
  - fast on consumer hardware
  - very fast on Apple Silicon
  - extremely fast on ROCm/CUDA
- 🫵 Automatic or your way
  - rely on established transformer patterns to let magnet do the work
  - keep your existing data processing functions, bring them to magnet!
- 🛰️ 100% Distributed
  - processing, embedding, storage, retrieval, querying, or inference from anywhere
  - as much or as little compute as you need
- 🧮 Choose Inference Method
  - HuggingFace
  - vLLM node
  - GPU
  - mlx
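Choosing between backends like these is often wired up as a simple registry. The sketch below is illustrative only — the loader names and returned handles are hypothetical, not magnet's actual API:

```python
# hypothetical backend registry -- names are illustrative, not magnet's API
BACKENDS = {
    "huggingface": lambda model: f"hf:{model}",   # e.g. a transformers pipeline
    "vllm": lambda model: f"vllm:{model}",        # e.g. a vLLM node endpoint
    "mlx": lambda model: f"mlx:{model}",          # e.g. mlx-lm on Apple Silicon
}

def load_backend(name: str, model: str):
    # look up the requested backend and build a handle for the model
    try:
        return BACKENDS[name](model)
    except KeyError:
        raise ValueError(f"unknown backend: {name}") from None

handle = load_backend("mlx", "BAAI/bge-large-en-v1.5")
```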
- 🌊 Huge Volumes
  - handle gigantic amounts of data inexpensively
  - fault-tolerant by design
  - decentralized workloads
- 🔐 Secure
  - JWT
  - Basic
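Either auth mode would plug into the `credentials` field of the config. The shapes below are an assumption for illustration, not magnet's documented schema:

```python
# hypothetical credential shapes -- illustrative only, not magnet's documented schema
def jwt_credentials(token: str) -> dict:
    # bearer-style JWT auth
    return {"type": "jwt", "token": token}

def basic_credentials(user: str, password: str) -> dict:
    # username/password ("Basic") auth
    return {"type": "basic", "user": user, "password": password}

creds = jwt_credentials("my-signed-token")
```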
- 🪵 World-Class Comprehension
  - magnet optionally logs its own code as it's executed (yes, really)
  - build a self-aware system and allow it to learn from itself
- emojis are the future