Commit 4e33af3

Typos in README.md
1 parent 215804e commit 4e33af3


README.md

Lines changed: 3 additions & 3 deletions
@@ -65,7 +65,7 @@ The VQA community developped an approach based on four learnable components:
   <img src="https://rg.gosu.cc/Cadene/vqa.pytorch/master/doc/mutan.png" width="400"/>
 </p>
 
-One of our claim is that the multimodal fusion between the image and the question representations is a critical component. Thus, our proposed model uses a Tucker Decomposition of the correlation Tensor to model reacher multimodal interactions in order to provide proper answers. Our best model is based on :
+One of our claim is that the multimodal fusion between the image and the question representations is a critical component. Thus, our proposed model uses a Tucker Decomposition of the correlation Tensor to model richer multimodal interactions in order to provide proper answers. Our best model is based on :
 
 - a pretrained Skipthoughts for the question model,
 - features from a pretrained Resnet-152 (with images of size 3x448x448) for the image model,
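The hunk above touches the README's description of the MUTAN fusion, where a 3-way correlation tensor between the question, image, and answer spaces is factored with a Tucker Decomposition. As a rough illustration of that idea only, here is a minimal PyTorch sketch of a Tucker-style bilinear fusion. It is not the repository's actual implementation: the class name `TuckerFusion`, the layer sizes, and the `tanh` activations are assumptions, and the rank constraint and dropout used by the real model are omitted.

```python
# Minimal sketch of a Tucker-style bilinear fusion between a question vector
# and an image feature vector. Illustrative only: names, sizes, and the tanh
# activations are assumptions, not the vqa.pytorch code; the real MUTAN fusion
# also applies a rank constraint and dropout.
import torch
import torch.nn as nn


class TuckerFusion(nn.Module):
    def __init__(self, dim_q=2400, dim_v=2048, dim_hq=310, dim_hv=310, dim_out=510):
        super().__init__()
        self.linear_q = nn.Linear(dim_q, dim_hq)  # factor matrix for the question
        self.linear_v = nn.Linear(dim_v, dim_hv)  # factor matrix for the image
        # Core tensor of the Tucker decomposition, learned end-to-end.
        self.core = nn.Parameter(0.01 * torch.randn(dim_hq, dim_hv, dim_out))

    def forward(self, q, v):
        q_tilde = torch.tanh(self.linear_q(q))  # (batch, dim_hq)
        v_tilde = torch.tanh(self.linear_v(v))  # (batch, dim_hv)
        # Contract the core tensor with both projected modalities:
        # z_k = sum_{i,j} q_tilde_i * core_{ijk} * v_tilde_j
        z = torch.einsum('bi,ijk,bj->bk', q_tilde, self.core, v_tilde)
        return z  # fused representation, fed to an answer classifier


fusion = TuckerFusion()
z = fusion(torch.randn(4, 2400), torch.randn(4, 2048))
print(z.shape)  # torch.Size([4, 510])
```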
@@ -206,7 +206,7 @@ We plan to add:
 We currently provide four models:
 
 - MLBNoAtt: a strong baseline (BayesianGRU + Element-wise product)
-- [MLBAtt](https://arxiv.org/abs/1610.04325): the previous state-of-the-art which add an attention strategy
+- [MLBAtt](https://arxiv.org/abs/1610.04325): the previous state-of-the-art which adds an attention strategy
 - MutanNoAtt: our proof of concept (BayesianGRU + Mutan Fusion)
 - MutanAtt: the current state-of-the-art
@@ -341,4 +341,4 @@ Please cite the arXiv paper if you use Mutan in your work:
 
 ## Acknowledgment
 
-Special thanks to the authors of [MLB](https://arxiv.org/abs/1610.04325) for providing some [Torch7 code](https://github.com/jnhwkim/MulLowBiVQA), [MCB](https://arxiv.org/abs/1606.01847) for providing some [Caffe code](https://github.com/akirafukui/vqa-mcb), and our professors and friends from LIP6 for the perfect working atmosphere.
+Special thanks to the authors of [MLB](https://arxiv.org/abs/1610.04325) for providing some [Torch7 code](https://github.com/jnhwkim/MulLowBiVQA), [MCB](https://arxiv.org/abs/1606.01847) for providing some [Caffe code](https://github.com/akirafukui/vqa-mcb), and our professors and friends from LIP6 for the perfect working atmosphere.
