Tokenizer returns sequence values in the range [0, nb_words). In this
example, MAX_NB_WORDS is 20000 and the data's maximum value is 19999, so
there is no need to use 'nb_words + 1'.
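As a minimal sketch of this point, using the Keras 1.x API that the example targets (the tiny `texts` corpus here is a placeholder):

```python
from keras.preprocessing.text import Tokenizer
from keras.layers import Embedding

MAX_NB_WORDS = 20000
EMBEDDING_DIM = 100
MAX_SEQUENCE_LENGTH = 1000

texts = ['some sample text', 'another sample text']  # placeholder corpus

tokenizer = Tokenizer(nb_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

# Word indices stay below MAX_NB_WORDS, so input_dim=MAX_NB_WORDS suffices;
# there is no off-by-one that would require MAX_NB_WORDS + 1.
embedding_layer = Embedding(MAX_NB_WORDS,
                            EMBEDDING_DIM,
                            input_length=MAX_SEQUENCE_LENGTH)
```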
* added Convolution1D instead of Conv1D, which is deprecated
* updated the rest of the example to use Conv1D
* Python 3 fails to decode the data as UTF-8, so encoding='latin-1' is used instead
* added a condition on the Python version for the encoding (lines 65-67); see the sketch after this list
* reverted Conv1D to the way it was
* make examples/pretrained_word_embeddings.py more memory-efficient (see the sketch after this list)
* rename NB_WORDS to nb_words, as it is not a global constant
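The encoding condition mentioned above might look roughly like this sketch (the file name and the surrounding parsing are placeholders, not the example's exact lines):

```python
import sys

fname = 'glove.6B.100d.txt'  # placeholder path to the embeddings file

if sys.version_info < (3,):
    f = open(fname)                      # Python 2 reads raw bytes
else:
    f = open(fname, encoding='latin-1')  # Python 3 would fail to decode non-UTF-8 bytes

for line in f:
    values = line.split()  # parse one embedding per line
f.close()
```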
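One plausible shape of the memory-efficiency change is to fill the embedding matrix while streaming the GloVe file, instead of first building a dict of every vector; the names below (`word_index`, the file path) are stand-ins, and the actual diff may differ:

```python
import numpy as np

MAX_NB_WORDS = 20000
EMBEDDING_DIM = 100
word_index = {'the': 1, 'quick': 2}  # stand-in for tokenizer.word_index
nb_words = min(MAX_NB_WORDS, len(word_index) + 1)

embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
with open('glove.6B.100d.txt', encoding='latin-1') as f:
    for line in f:
        values = line.split()
        i = word_index.get(values[0])
        if i is not None and i < nb_words:
            # Parse a vector only for words we actually keep.
            embedding_matrix[i] = np.asarray(values[1:], dtype='float32')
```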