Update README.md
README.md
CHANGED
@@ -14,9 +14,10 @@ library_name: transformers
 pipeline_tag: text-generation
 ---
 
-# 
+# Qwerky Optimized Llama3.1 Mamba Hybrid - 8B Instruct
 
-
+
+This is a hybrid Mamba-Transformer model based on the Llama 3.1 architecture, distilled from Llama 3.3 70B into an 8B-parameter model using Qwerky's proprietary distillation method. The model interleaves Mamba layers with attention layers for efficient sequence modeling. The result is an 8B-parameter model comparable in quality to Llama 3.1 8B while running as fast as or faster than Llama 3.2 3B.
 
 **Model Developer**: Qwerky AI
 
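Since the front matter sets `pipeline_tag: text-generation`, the model added in this commit should load through the standard transformers text-generation pipeline. Below is a minimal usage sketch; the repo id is a hypothetical placeholder (the commit does not name one), and the prompt and generation settings are illustrative.

```python
# Minimal usage sketch for the model card above.
# The repo id is a hypothetical placeholder; substitute the actual
# Hugging Face repo id for the released model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="qwerky-ai/llama3.1-mamba-hybrid-8b-instruct",  # hypothetical id
)

out = generator(
    "Explain what a hybrid Mamba-Transformer model is in one sentence.",
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```

Depending on how the hybrid architecture is registered with transformers, loading may additionally require passing `trust_remote_code=True` to `pipeline`.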