
Commit 803a858

Merge branch 'fix/0915'
2 parents: ac22c76 + b4dfc43

File tree

1 file changed: +7 −3 lines


README.md

Lines changed: 7 additions & 3 deletions
@@ -72,7 +72,7 @@ print(detect("Hello 世界 こんにちは", model="auto", k=3))
 
 `detect` always returns a list of candidates ordered by score. Use `model="full"` for the best accuracy or `model="lite"` for an offline-only workflow.
 
-### Reuse Configuration
+### Custom Configuration
 
 ```python
 from fast_langdetect import LangDetectConfig, LangDetector
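The hunk above keeps the guarantee that `detect` returns candidates ordered by score. That ranking behavior can be sketched generically; the `rank_candidates` helper and its scores are illustrative, not fast_langdetect's actual implementation:

```python
# Generic sketch of "candidates ordered by score, best first".
# Illustrative only -- not fast_langdetect's internals.
def rank_candidates(scores, k=3):
    """Return the top-k (language, score) pairs, highest score first."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[:k]

print(rank_candidates({"en": 0.91, "ja": 0.05, "zh": 0.03}, k=2))
# → [('en', 0.91), ('ja', 0.05)]
```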
@@ -83,7 +83,9 @@ print(detector.detect("Bonjour", k=1))
 print(detector.detect("Hola", model="full", k=1))
 ```
 
-Instantiate `LangDetector` when you want to reuse a model or share configuration between calls without re-downloading files.
+Each `LangDetector` instance maintains its own in-memory model cache. Once loaded, models are reused for subsequent calls within the same instance. The global `detect()` function uses a shared default detector, so it also benefits from automatic caching.
+
+Create a custom `LangDetector` instance when you need specific configuration (custom cache directory, input limits, etc.) or isolated model management.
 
 #### 🌵 Fallback Policy
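The per-instance model cache described in the added lines is a load-once, reuse-afterwards pattern. A minimal generic sketch (the `ModelCache` class is illustrative, not fast_langdetect's internals):

```python
class ModelCache:
    """Load each named model at most once; later lookups reuse the cached object."""

    def __init__(self):
        self._models = {}
        self.load_count = 0  # tracks how many expensive loads actually happened

    def get(self, name):
        if name not in self._models:
            self.load_count += 1                     # expensive load happens here only
            self._models[name] = f"<model:{name}>"   # placeholder for a real model object
        return self._models[name]

cache = ModelCache()
cache.get("lite")
cache.get("lite")   # second call is served from the cache, no reload
assert cache.load_count == 1
```

Two separate `ModelCache` instances would each load their own copy, which mirrors why a shared default detector avoids redundant loads.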

@@ -229,9 +231,11 @@ from fast_langdetect import LangDetectConfig, LangDetector
 with resources.path("fast_langdetect.resources", "lid.176.ftz") as model_path:
     config = LangDetectConfig(custom_model_path=str(model_path))
     detector = LangDetector(config)
-    print(detector.detect("Hello world", model="lite", k=1))
+    print(detector.detect("Hello world", k=1))
 ```
 
+When using a custom model via `custom_model_path`, the `model` parameter in `detect()` calls is ignored, since your custom model file is always loaded directly. The `model="lite"`, `model="full"`, and `model="auto"` parameters only apply when using the built-in models.
+
 ## Benchmark 📊
 
 For detailed benchmark results, refer
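The precedence rule in the note added above (a custom model path overrides the `model` argument) can be sketched as a small resolver; `resolve_model` is a hypothetical helper for illustration, not part of fast_langdetect's API:

```python
def resolve_model(custom_model_path=None, model="auto"):
    """Hypothetical resolver: a custom model path wins over the built-in choice."""
    if custom_model_path is not None:
        return custom_model_path  # `model` is ignored entirely in this branch
    # Built-in selection only applies when no custom path is configured.
    builtin = {"lite": "lid.176.ftz", "full": "lid.176.bin"}
    return builtin.get(model, "auto-select")

# A custom path takes precedence even when model="full" is passed.
print(resolve_model("/models/lid.176.ftz", model="full"))
# → /models/lid.176.ftz
```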

0 commit comments
