- The model is also No. 1 by a wide margin on the [SemRel24STS](https://huggingface.co/datasets/SemRel/SemRel2024) task, with an accuracy of 81.12% against 73.14% for the second-place Google Gemini embedding model (as of 30 March 2025). SemRel24STS evaluates a system's ability to measure the semantic relatedness between two sentences across 14 different languages; a minimal scoring sketch follows this list.
- We noticed the model does exceptionally well on legal and news retrieval and similarity tasks on the MTEB leaderboard.

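As a rough illustration of the sentence-pair scoring that SemRel24STS measures, the sketch below embeds two sentences and uses cosine similarity as the relatedness score. It assumes the `sentence-transformers` library, and the repository id is a placeholder, not this model's actual name; substitute the real model id before running.

```python
# Minimal sketch: score semantic relatedness between two sentences.
# "your-org/your-embedding-model" is a placeholder, not this model's real repo id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/your-embedding-model")

sentences = [
    "A man is playing a guitar on stage.",
    "Someone performs a song with a guitar.",
]

# Encode both sentences into dense vectors.
embeddings = model.encode(sentences, normalize_embeddings=True)

# Cosine similarity of the embeddings serves as the relatedness score
# (higher means more related).
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Relatedness score: {score:.4f}")
```

On a SemRel24STS-style evaluation, scores like this would be compared against human relatedness judgments for each sentence pair.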
### Strengths
- Excellent at understanding conversational and natural language queries