Commit Graph

7 Commits

Author SHA1 Message Date
Hiroya Chiba
65ce238f03 skip newsgroup header (#5585) 2017-03-03 17:45:54 -08:00
Francois Chollet
5d0cb10949 Make utils globally importable & update examples. 2017-02-28 14:41:30 -08:00
Francois Chollet
10bfb1c565 Integration tests passing. 2017-02-14 16:08:30 -08:00
Minkoo Seo
f1a95869eb Fix off-by-one error in word embeddings example script
Tokenizer returns sequence values in the range [0, nb_words). In this
example, MAX_NB_WORDS is 20000, so the data's maximum value is 19999. There
is no need to use 'nb_words + 1'.
2017-02-06 10:53:33 -08:00
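The sizing argument in the commit message above can be sketched in plain Python (MAX_NB_WORDS and EMBEDDING_DIM here are hypothetical stand-ins for the script's constants, not names confirmed by the diff):

```python
MAX_NB_WORDS = 20000   # hypothetical cap, matching the commit message
EMBEDDING_DIM = 100    # hypothetical vector size

# Tokenizer indices fall in [0, MAX_NB_WORDS), so the largest possible
# index is MAX_NB_WORDS - 1. A matrix with exactly MAX_NB_WORDS rows is
# therefore sufficient; sizing it MAX_NB_WORDS + 1 just wastes a row.
embedding_matrix = [[0.0] * EMBEDDING_DIM for _ in range(MAX_NB_WORDS)]

largest_index = MAX_NB_WORDS - 1  # 19999 in this example
assert largest_index < len(embedding_matrix)
```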
Yad Faeq
98b289630a Word embedding example updated (#3417)
* Added Convolution1D instead of Conv1D, which is deprecated

* updated rest of the example using Conv1D

* Python 3 fails to decode the data as utf-8, so use encoding='latin-1'

* added a condition for the encoding (lines 65-67)

* Conv1D reverted back to the way it was
2016-08-08 10:59:31 -07:00
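The encoding workaround described in the bullets above can be sketched as follows (open_embeddings is a hypothetical helper for illustration, not a name from the actual diff):

```python
import sys
import os
import tempfile

def open_embeddings(path):
    # Python 3's default utf-8 decoder can reject stray bytes in an
    # embeddings file, so fall back to latin-1, which maps every byte
    # to a code point and never raises UnicodeDecodeError.
    if sys.version_info < (3,):
        return open(path)
    return open(path, encoding='latin-1')

# Demo: a line containing the byte 0xE9, which is invalid as standalone UTF-8.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'caf\xe9 0.1 0.2\n')
with open_embeddings(path) as f:
    line = f.read()  # latin-1 decodes 0xE9 as 'é' instead of failing
os.remove(path)
```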
dolaameng
f221ef952f make examples/pretrained_word_embeddings.py more memory efficient (#3289)
* make examples/pretrained_word_embeddings.py more memory efficient

* rename NB_WORDS to nb_words as it is not a global constant
2016-07-23 10:14:28 -07:00
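One way to read the memory-efficiency change above: rather than buffering every vector from the embeddings file in a dict, fill the final matrix in a single pass and discard vectors for out-of-vocabulary words. This is a hypothetical sketch of that idea (load_embedding_matrix and its parameters are illustrative, not names from the actual diff):

```python
import os
import tempfile

def load_embedding_matrix(path, word_index, nb_words, dim):
    # Pre-allocate the final matrix and fill rows while streaming the
    # file, so vectors for words outside the corpus vocabulary are
    # never held in memory.
    matrix = [[0.0] * dim for _ in range(nb_words)]
    with open(path, encoding='latin-1') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            word = parts[0]
            i = word_index.get(word)
            if i is not None and i < nb_words:
                matrix[i] = [float(x) for x in parts[1:]]
    return matrix

# Tiny demo with an in-memory vocabulary and a throwaway embeddings file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    f.write('hello 0.1 0.2\nworld 0.3 0.4\nunused 0.5 0.6\n')
word_index = {'hello': 0, 'world': 1}
matrix = load_embedding_matrix(path, word_index, nb_words=3, dim=2)
os.remove(path)
```

Rows for words with no pre-trained vector (or indices beyond nb_words) simply stay at zero.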
Francois Chollet
0d5289141e Add pre-trained word embeddings example 2016-07-16 17:51:17 -07:00