`detect` always returns a list of candidates ordered by score. Use `model="full"` for the best accuracy or `model="lite"` for an offline-only workflow.
### Custom Configuration

```python
from fast_langdetect import LangDetectConfig, LangDetector
```
Each `LangDetector` instance maintains its own in-memory model cache. Once loaded, models are reused for subsequent calls within the same instance. The global `detect()` function uses a shared default detector, so it also benefits from automatic caching.

Create a custom `LangDetector` instance when you need specific configuration (a custom cache directory, input limits, etc.), isolated model management, or a shared configuration across calls without re-downloading model files.
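The per-instance caching described above can be sketched as follows. This is a hypothetical, simplified illustration of the behavior, not fast_langdetect's actual internals; all names here (`Detector`, `load_model_from_disk`, `LOAD_COUNT`) are invented for the example.

```python
# Illustrative sketch only: each detector keeps its own in-memory cache,
# so a model is loaded from disk at most once per instance.

class FakeModel:
    def __init__(self, name):
        self.name = name

LOAD_COUNT = {"full": 0, "lite": 0}  # tracks disk loads for the demo

def load_model_from_disk(name):
    LOAD_COUNT[name] += 1
    return FakeModel(name)

class Detector:
    def __init__(self):
        self._cache = {}  # per-instance in-memory model cache

    def _get_model(self, name):
        # Load on first use, then reuse the cached model.
        if name not in self._cache:
            self._cache[name] = load_model_from_disk(name)
        return self._cache[name]

    def detect(self, text, model="full"):
        m = self._get_model(model)
        return {"model": m.name, "text": text}

d = Detector()
d.detect("hello")
d.detect("world")  # second call reuses the cached model
assert LOAD_COUNT["full"] == 1
```

A module-level default `Detector` would give the same caching benefit to a plain `detect()` function, which is the pattern the library's global `detect()` is described as using.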
#### 🌵 Fallback Policy
```python
from importlib import resources

with resources.path("fast_langdetect.resources", "lid.176.ftz") as model_path:
    ...
```
When using a custom model via `custom_model_path`, the `model` parameter in `detect()` calls is ignored since your custom model file is always loaded directly. The `model="lite"`, `model="full"`, and `model="auto"` parameters only apply when using the built-in models.
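The precedence rule above can be sketched as a small resolver. This is an illustrative example of the described behavior, not the library's code; the function name `resolve_model_source` and the built-in file names are assumptions for the sketch (only `lid.176.ftz` appears in this document).

```python
# Illustrative sketch: a configured custom_model_path always wins,
# and the model= argument is only consulted for built-in models.

def resolve_model_source(model="auto", custom_model_path=None):
    if custom_model_path is not None:
        return ("custom", custom_model_path)  # model= is ignored here
    if model == "lite":
        return ("builtin", "lid.176.ftz")     # lite / offline model
    return ("builtin", "full-model")          # "full" or "auto" (name illustrative)

# model="lite" has no effect once a custom path is set:
assert resolve_model_source("lite", "/models/my.bin") == ("custom", "/models/my.bin")
assert resolve_model_source("lite") == ("builtin", "lid.176.ftz")
```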