
Conversation


@voidful voidful commented Jan 5, 2020

This pull request fixes the issue where any of the refs is an empty string.

When the number of refs is inconsistent, I fill in an empty string as padding, which causes an error:

scores = n.compute_metrics(
    ref_list=[
        [
            "this is one reference sentence for sentence1",
            ""
        ],
        [
            "this is one more reference sentence for sentence1",
            "this is the second reference sentence for sentence2"
        ],
    ],
    hyp_list=[
        "this is the model generated sentence1 which seems good enough",
        "this is sentence2 which has been generated by your model"
    ]
)


msftclas commented Jan 5, 2020

CLA assistant check
All CLA requirements met.


juharris commented Jan 5, 2020

Thanks for pointing this out. The references are the targets that the generated hypothesis should match. It's possible that a target would indeed be an empty string, so I think we should correct what is actually causing the error instead of silently ignoring empty strings, which could mean that a hypothesis is compared with a target it wasn't meant to be compared with. For example:

References:

  1. "Sentence 1"
  2. "" (this one would get filtered out)
  3. "Sentence 3"

Hypotheses:

  1. "Sentence 1"
  2. "Sentence 2"
  3. "Sentence 3"


voidful commented Jan 6, 2020

I agree that we should correct what is actually causing the error.
To state the problem more generally, it occurs when one of the refs is empty or when the hyp is empty:

ref=["this is a test", ""],
hyp="this is a good test"

ref=["this is a good test"],
hyp=""

The vector-based metrics (skip-thoughts / glove metrics) raise an error because the empty string is passed to the encoder, so the following commit will correct that.
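A minimal sketch of the kind of guard I have in mind (the encode callable and the dim parameter here are hypothetical, not the actual nlg-eval internals):

import numpy as np

def encode_with_empty_guard(encode, sentences, dim):
    # encode is assumed to map a list of non-empty sentences to an
    # (n, dim) array and to raise on empty input; empty strings are
    # given a zero vector instead of being passed to the encoder.
    vectors = np.zeros((len(sentences), dim))
    nonempty = [(i, s) for i, s in enumerate(sentences) if s.strip()]
    if nonempty:
        indices, texts = zip(*nonempty)
        vectors[list(indices)] = encode(list(texts))
    return vectors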

.gitignore (outdated)
.idea/
venv/

# General
Member

Can you remove changes to your gitignore? They seem rather specific to your setup.

Author

OK, I can remove it.


Author

@juharris Thank you for your advice. It is exactly what I needed.


scores = n.compute_individual_metrics(ref=["Chocolate and an Interview "],
                                      hyp="Chocolate , a Healthy Food")
print(scores)
Member

Can you assert something here, so that tests can detect regressions?
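For example, something along these lines (assuming these tests use unittest; expected_bleu is hypothetical and would be pinned to the value a known-good run produces):

# Pin the score so a regression changes the test outcome.
self.assertAlmostEqual(expected_bleu, scores['Bleu_1'], places=3)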

Author

I found that this case is not necessary. I will remove it.

hypothesis = os.path.join(root_dir, 'examples/6.csv')
references = os.path.join(root_dir, 'examples/7.pt_beam_1.csv')
scores = nlgeval.compute_metrics(hypothesis, [references])
print(scores)
Member

Same, better to assert the result you expect.

Author

I found that this case is not necessary. I will remove it.
