which should take less than 5 minutes on a GPU. After training completes, it'll
start a LIT server on the development set; navigate to http://localhost:5432
for the UI. The default view is the sentiment model, but you can switch to
[STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) or
[MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) using the toolbar or the
gear icon in the upper right.

### Quick start: language modeling

To explore predictions from a pretrained language model (BERT or GPT-2), run:
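For example (the module path below follows the `lit_nlp/examples` directory at
the time of writing; adjust it if the demo has moved):

```sh
python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased \
    --port=5432
```

Then navigate to http://localhost:5432 for the UI, as before.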
* Write a model wrapper which follows the [`Model` API](documentation/python_api.md#models)
* Pass models, datasets, and any additional
  [components](documentation/python_api.md#interpretation-components) to the
  LIT server class (see the sketch below)
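Concretely, a custom `demo.py` can be quite short. The sketch below is
illustrative, not canonical: `MyDataset`, `MyModel`, and the
`sentence`/`probas` field names are placeholders, and the exact
`Dataset`/`Model` interfaces are documented in
[python_api.md](documentation/python_api.md).

```python
# demo.py: minimal sketch of a custom LIT server.
# All concrete names here are hypothetical; see documentation/python_api.md.
from lit_nlp import dev_server
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ['0', '1']  # toy binary sentiment labels


class MyDataset(lit_dataset.Dataset):
  """Toy dataset; a real loader would read examples from disk."""

  def __init__(self):
    # Each example is a dict whose keys match spec() below.
    self._examples = [
        {'sentence': 'a masterpiece', 'label': '1'},
        {'sentence': 'utterly tedious', 'label': '0'},
    ]

  def spec(self):
    return {
        'sentence': lit_types.TextSegment(),
        'label': lit_types.CategoryLabel(vocab=LABELS),
    }


class MyModel(lit_model.Model):
  """Stub wrapper; replace predict_minibatch() with real inference."""

  def input_spec(self):
    return {'sentence': lit_types.TextSegment()}

  def output_spec(self):
    return {'probas': lit_types.MulticlassPreds(vocab=LABELS, parent='label')}

  def predict_minibatch(self, inputs):
    # One output dict per input example; uniform scores as a stand-in.
    return [{'probas': [0.5, 0.5]} for _ in inputs]


def main():
  models = {'my_model': MyModel()}
  datasets = {'my_data': MyDataset()}
  # Start the LIT server; open http://localhost:5432 once it is running.
  dev_server.Server(models, datasets, port=5432).serve()


if __name__ == '__main__':
  main()
```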
For a full walkthrough, see
[adding models and data](documentation/python_api.md#adding-models-and-data).
## Extending LIT with new components
LIT is easy to extend with new interpretability components, generators, and
more, both on the frontend or the backend. See the
[developer guide](documentation/development.md) to get started.
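As a taste of the backend API, a counterfactual generator can be a single
class. The sketch below is hypothetical (the class name and the `sentence`
field are placeholders), with the method signature following the component
APIs described in the developer guide:

```python
# Sketch of a custom counterfactual generator (illustrative only).
from lit_nlp.api import components as lit_components


class UppercaseGenerator(lit_components.Generator):
  """Creates one counterfactual per example by upper-casing its text."""

  def generate(self, example, model, dataset, config=None):
    # Return a list of new examples derived from `example`; assumes the
    # dataset has a 'sentence' text field.
    new_example = dict(example)
    new_example['sentence'] = example['sentence'].upper()
    return [new_example]
```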
## Citing LIT
If you use LIT as part of your work, please cite
[our EMNLP paper](https://arxiv.org/abs/2008.05122):
```
@inproceedings{tenney2020language,
  title = "The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for {NLP} Models",
  author = "Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
  year = "2020",
  publisher = "Association for Computational Linguistics",
}
```