Commit 0ad5875

Spelling (#201)
* spelling: ; otherwise,
* spelling: an
* spelling: automate
* spelling: automated
* spelling: consequent
* spelling: converted
* spelling: correlation
* spelling: greater
* spelling: is the
* spelling: library
* spelling: multi
* spelling: ordinary
* spelling: selection
* spelling: terms
* spelling: vacant

Signed-off-by: Josh Soref <[email protected]>
1 parent b9c9d8a commit 0ad5875

File tree

10 files changed: +25 -25 lines

doc/multioutput.rst (3 additions, 3 deletions)

@@ -12,7 +12,7 @@ MIMO (Multi-Input Multi-Output) data. For classification, it can be used for
 multilabel data. Actually, for multiclass classification, which has one output with
 multiple categories, multioutput feature selection can also be useful. The multiclass
 classification can be converted to multilabel classification by one-hot encoding
-target ``y``. The canonical correaltion coefficient between the features ``X`` and the
+target ``y``. The canonical correlation coefficient between the features ``X`` and the
 one-hot encoded target ``y`` has equivalent relationship with Fisher's criterion in
 LDA (Linear Discriminant Analysis) [1]_. Applying :class:`FastCan` to the converted
 multioutput data may result in better accuracy in the following classification task
@@ -23,7 +23,7 @@ Relationship on multiclass data
 Assume the feature matrix is :math:`X \in \mathbb{R}^{N\times n}`, the multiclass
 target vector is :math:`y \in \mathbb{R}^{N\times 1}`, and the one-hot encoded target
 matrix is :math:`Y \in \mathbb{R}^{N\times m}`. Then, the Fisher's criterion for
-:math:`X` and :math:`y` is denoted as :math:`J` and the canonical correaltion
+:math:`X` and :math:`y` is denoted as :math:`J` and the canonical correlation
 coefficient between :math:`X` and :math:`Y` is denoted as :math:`R`. The relationship
 between :math:`J` and :math:`R` is given by
 
@@ -36,7 +36,7 @@ or
    R^2 = \frac{J}{1+J}
 
 It should be noted that the number of the Fisher's criterion and the canonical
-correaltion coefficient is not only one. The number of the non-zero canonical
+correlation coefficient is not only one. The number of the non-zero canonical
 correlation coefficients is no more than :math:`\min (n, m)`, and each canonical correlation
 coefficient is one-to-one correspondence to each Fisher's criterion.

doc/ols_and_omp.rst (1 addition, 1 deletion)

@@ -39,7 +39,7 @@ it the following advantages over OLS and OMP:
   and/or added some constants, the selection result given by :class:`FastCan` will be
   unchanged. See :ref:`sphx_glr_auto_examples_plot_affinity.py`.
 * Multioutput: as :class:`FastCan` use canonical correlation for feature ranking, it is
-  naturally support feature seleciton on dataset with multioutput.
+  naturally support feature selection on dataset with multioutput.
 
 
 .. rubric:: References

doc/pruning.rst (1 addition, 1 deletion)

@@ -16,7 +16,7 @@ by sparse linear combinations of the atoms.
 We use these atoms as the target :math:`Y` and select samples based on their correlation with :math:`Y`.
 
 One challenge to use :class:`FastCan` for data pruning is that the number to select is much larger than feature selection.
-Normally, this number is higher than the number of features, which will make the pruned data matrix singular.
+Normally, this number is greater than the number of features, which will make the pruned data matrix singular.
 In other words, :class:`FastCan` will easily think the pruned data is redundant and no additional sample
 should be selected, as any additional samples can be represented by linear combinations of the selected samples.
 Therefore, the number to select has to be set to small.

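The corrected passage explains that selecting more samples than features makes the pruned data matrix singular. A minimal numpy illustration of why (not fastcan-specific): once the selected rows span the feature space, every additional row is a linear combination of them.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))  # 1000 samples, 5 features

# Selecting more rows than there are features cannot raise the rank
# beyond 5: the extra rows lie in the span of rows already selected.
selected = X[:20]  # 20 samples selected, 20 > 5
print(np.linalg.matrix_rank(selected))  # 5
```

This is why the number of samples to select has to be kept small relative to the redundancy check.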
examples/plot_fisher.py (3 additions, 3 deletions)

@@ -5,7 +5,7 @@
 
 .. currentmodule:: fastcan
 
-In this examples, we will demonstrate the canonical correaltion coefficient
+In this examples, we will demonstrate the canonical correlation coefficient
 between the features ``X`` and the one-hot encoded target ``y`` has equivalent
 relationship with Fisher's criterion in LDA (Linear Discriminant Analysis).
 """
@@ -17,14 +17,14 @@
 # Prepare data
 # ------------
 # We use ``iris`` dataset and transform this multiclass data to multilabel data by
-# one-hot encoding. Here, drop="first" is necessary, otherwise, the transformed target
+# one-hot encoding. Here, drop="first" is necessary; otherwise, the transformed target
 # is not full column rank.
 
 from sklearn import datasets
 from sklearn.preprocessing import OneHotEncoder
 
 X, y = datasets.load_iris(return_X_y=True)
-# drop="first" is necessary, otherwise, the transformed target is not full column rank
+# drop="first" is necessary; otherwise, the transformed target is not full column rank
 y_enc = OneHotEncoder(
     drop="first",
     sparse_output=False,

examples/plot_forecasting.py (2 additions, 2 deletions)

@@ -7,7 +7,7 @@
 
 In this examples, we will demonstrate how to use :func:`make_narx` to build (nonlinear)
 AutoRegressive (AR) models for time-series forecasting.
-The time series used isthe monthly average atmospheric CO2 concentrations
+The time series used is the monthly average atmospheric CO2 concentrations
 from 1958 and 2001.
 The objective is to forecast the CO2 concentration till nowadays with
 initial 18 months data.
@@ -94,7 +94,7 @@
 # Nonlinear AR model
 # ------------------
 # We can use :func:`make_narx` to easily build a nonlinear AR model, which does not
-# has a input. Therefore, the input ``X`` is set as ``None``.
+# has an input. Therefore, the input ``X`` is set as ``None``.
 # :func:`make_narx` will search 10 polynomial terms, whose maximum degree is 2 and
 # maximum delay is 9.
 

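The example above builds its AR model with :func:`make_narx`; as a plain-numpy sketch of the underlying idea, an AR(p) model predicts :math:`y(k)` from its own past values and can be fitted by least squares on a lagged design matrix. The series below is synthetic, standing in for the CO2 data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic oscillatory series standing in for the monthly CO2 data
y = np.sin(0.3 * np.arange(300)) + 0.01 * rng.standard_normal(300)

p = 3  # AR order: predict y(k) from y(k-1), ..., y(k-p)
lagged = np.column_stack([y[p - j : len(y) - j] for j in range(1, p + 1)])
target = y[p:]

# Ordinary least squares on the lagged design matrix
coef, *_ = np.linalg.lstsq(lagged, target, rcond=None)

# One-step-ahead predictions on the training data
pred = lagged @ coef
print(np.max(np.abs(pred - target)))  # small residual
```

An AR model has no exogenous input, which is why the example passes ``X=None`` to :func:`make_narx`.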
examples/plot_narx.py (7 additions, 7 deletions)

@@ -47,7 +47,7 @@
 X = np.c_[u0[max_delay:], u1[max_delay:]]
 
 # %%
-# Build term libriary
+# Build term library
 # -------------------
 # To build a reduced polynomial NARX model, it is normally have two steps:
 #
@@ -56,14 +56,14 @@
 #
 # #. Learn the coefficients of the terms.
 #
-# To search the structure of the model, the candidate term libriary should be
+# To search the structure of the model, the candidate term library should be
 # constructed by the following two steps.
 #
 # #. Time-shifted variables: the raw input-output data, i.e., :math:`u_0(k)`,
 #    :math:`u_1(k)`, and :math:`y(k)`, are converted into :math:`u_0(k-1)`,
 #    :math:`u_1(k-2)`, etc.
 #
-# #. Nonlinear terms: the time-shifted variables are onverted to nonlinear terms
+# #. Nonlinear terms: the time-shifted variables are converted to nonlinear terms
 #    via polynomial basis functions, e.g., :math:`u_0(k-1)^2`,
 #    :math:`u_0(k-1)u_0(k-3)`, etc.
 #
@@ -124,8 +124,8 @@
 # %%
 # Build NARX model
 # ----------------
-# As the reduced polynomial NARX is a linear function of the nonlinear tems,
-# the coefficient of each term can be easily estimated by oridnary least squares.
+# As the reduced polynomial NARX is a linear function of the nonlinear terms,
+# the coefficient of each term can be easily estimated by ordinary least squares.
 # In the printed NARX model, it is found that :class:`FastCan` selects the correct
 # terms and the coefficients are close to the true values.
 
@@ -143,9 +143,9 @@
 
 print_narx(narx_model)
 # %%
-# Automaticated NARX modelling workflow
+# Automated NARX modelling workflow
 # -------------------------------------
-# We provide :meth:`narx.make_narx` to automaticate the workflow above.
+# We provide :meth:`narx.make_narx` to automate the workflow above.
 
 from fastcan.narx import make_narx
 

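The corrected comments above describe the workflow: build a term library from time-shifted variables and polynomial basis functions, then estimate the coefficients by ordinary least squares, since the reduced polynomial NARX is linear in its terms. A hedged numpy sketch of those two steps on a hypothetical system (it skips the :class:`FastCan` term-selection step and uses a hand-built library, so it is an illustration, not the example's actual model):

```python
import numpy as np

rng = np.random.default_rng(12345)
# Hypothetical system: y(k) = 0.5*y(k-1) + 0.3*u(k-1)**2 + noise
u = rng.uniform(-1, 1, 200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k - 1] + 0.3 * u[k - 1] ** 2 + 0.01 * rng.standard_normal()

# Step 1: time-shifted variables
y1, u1 = y[:-1], u[:-1]  # y(k-1), u(k-1)
target = y[1:]           # y(k)

# Step 2: nonlinear terms (polynomial basis, degree <= 2) form the library
library = np.column_stack([y1, u1, y1**2, u1**2, y1 * u1])

# The model is linear in the terms, so OLS estimates the coefficients
coef, *_ = np.linalg.lstsq(library, target, rcond=None)
print(coef.round(2))  # y(k-1) and u(k-1)**2 coefficients near 0.5 and 0.3
```

In the real workflow, :class:`FastCan` would first select a few terms from a much larger library before the least-squares fit.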
examples/plot_narx_multi.py (2 additions, 2 deletions)

@@ -1,6 +1,6 @@
 """
 =======================
-Mulit-output NARX model
+Multi-output NARX model
 =======================
 
 .. currentmodule:: fastcan
@@ -64,7 +64,7 @@
 
 
 # %%
-# Identify the mulit-output NARX model
+# Identify the multi-output NARX model
 # ------------------------------------
 # We provide :meth:`narx.make_narx` to automatically find the model
 # structure. `n_terms_to_select` can be a list to indicate the number

fastcan/_refine.py (3 additions, 3 deletions)

@@ -38,10 +38,10 @@ def refine(selector, drop=1, max_iter=None, verbose=1):
     In the refining process, the selected features will be dropped, and
     the vacancy positions will be refilled from the candidate features.
 
-    The processing of a vacany position is refilled after searching all
+    The processing of a vacant position is refilled after searching all
     candidate features is called an `iteration`.
 
-    The processing of a vacany position is refilled by a different features
+    The processing of a vacant position is refilled by a different features
     from the dropped one, which increase the SSC of the selected features
     is called a `valid iteration`.
 
@@ -51,7 +51,7 @@ def refine(selector, drop=1, max_iter=None, verbose=1):
         FastCan selector.
 
     drop : int or array-like of shape (n_drops,) or "all", default=1
-        The number of the selected features dropped for the consequencing
+        The number of the selected features dropped for the consequent
         reselection.
 
     max_iter : int, default=None

fastcan/narx/_utils.py (1 addition, 1 deletion)

@@ -217,7 +217,7 @@ def make_narx(
         The verbosity level of refine.
 
    refine_drop : int or "all", default=None
-        The number of the selected features dropped for the consequencing
+        The number of the selected features dropped for the consequent
        reselection. If `drop` is None, no refining will be performed.
 
    refine_max_iter : int, default=None

fastcan/narx/tests/test_narx.py (2 additions, 2 deletions)

@@ -263,7 +263,7 @@ def make_data(multi_output, nan, rng):
     ).fit(X, y)
 
 
-def test_mulit_output_warn():
+def test_multi_output_warn():
     X = np.random.rand(10, 2)
     y = np.random.rand(10, 2)
     for i in range(2):
@@ -342,7 +342,7 @@ def test_fit_intercept():
     assert_array_equal(narx.intercept_, [0.0, 0.0])
 
 
-def test_mulit_output_error():
+def test_multi_output_error():
     X = np.random.rand(10, 2)
     y = np.random.rand(10, 2)
     time_shift_ids = np.array([[0, 1], [1, 1]])

0 commit comments