Add missing Softmax activation to memnn example. (#3706)

The implementation of the bAbI [End-To-End Memory
Network](https://arxiv.org/pdf/1503.08895v5.pdf) in the example
seems to be missing the softmax layer.

Quoting the paper:

> The query q is also embedded (again, in the simplest case via another embedding matrix
B with the same dimensions as A) to obtain an internal state u. In the embedding space, we compute
the match between u and each memory m<sub>i</sub> by taking the inner product followed by a softmax.
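
A minimal NumPy sketch of that match step (toy shapes; `m` and `u` here stand for the embedded story sentences and the embedded query, both made up for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

m = np.random.rand(5, 64)  # 5 embedded memories m_i (via matrix A)
u = np.random.rand(64)     # embedded query (via matrix B)

p = softmax(m @ u)         # p_i = softmax(u . m_i): attention weights over memories
```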

Also, the question encoder
[here](0df0177437/examples/babi_memnn.py#L186) seems to sum over the probabilities and the question vector, as suggested in the original paper:

> Output memory representation: Each x<sub>i</sub> has a corresponding output vector c<sub>i</sub> (given in the
simplest case by another embedding matrix C). The response vector from the memory o is then a
sum over the transformed inputs c<sub>i</sub>, weighted by the probability vector from the input.
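
Continuing the sketch above, the response vector is the probability-weighted sum of the output embeddings (values again made up):

```python
import numpy as np

p = np.array([0.7, 0.1, 0.1, 0.05, 0.05])  # attention weights from the match step
c = np.random.rand(5, 64)                  # output vectors c_i (via matrix C)
o = p @ c                                  # o = sum_i p_i * c_i
```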

I tried running the model (with and without the intermediate softmax)
against the _Single Supporting Fact_ en-10k dataset and found that the
network with the intermediate softmax trained much faster (95% validation accuracy at epoch 100) than the one without (67% at epoch 100).
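
For anyone reproducing this, a sketch of loading that task in the pattern the example uses (the archive URL and internal path below are taken from the example as far as I can tell; treat them as assumptions if the file has moved):

```python
import tarfile
from keras.utils.data_utils import get_file

# fetch the bAbI v1.2 tasks archive
path = get_file('babi-tasks-v1-2.tar.gz',
                origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz')
tar = tarfile.open(path)

# QA1 "single supporting fact", 10k English set
challenge = 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt'
train_file = tar.extractfile(challenge.format('train'))
test_file = tar.extractfile(challenge.format('test'))
```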

Network without the Softmax activation in the Input Memory Representation at epoch=100
======================================================================================
```
Iteration 10
Train on 10000 samples, validate on 1000 samples
Epoch 1/10
10000/10000 [==============================] - 8s - loss: 0.0549 - acc: 0.9819 - val_loss: 1.8088 - val_acc: 0.6470
Epoch 2/10
10000/10000 [==============================] - 6s - loss: 0.0612 - acc: 0.9802 - val_loss: 1.7839 - val_acc: 0.6650
Epoch 3/10
10000/10000 [==============================] - 6s - loss: 0.0542 - acc: 0.9812 - val_loss: 1.7595 - val_acc: 0.6750
Epoch 4/10
10000/10000 [==============================] - 6s - loss: 0.0538 - acc: 0.9826 - val_loss: 1.8198 - val_acc: 0.6670
Epoch 5/10
10000/10000 [==============================] - 6s - loss: 0.0590 - acc: 0.9790 - val_loss: 1.7891 - val_acc: 0.6650
Epoch 6/10
10000/10000 [==============================] - 6s - loss: 0.0548 - acc: 0.9803 - val_loss: 1.7682 - val_acc: 0.6790
Epoch 7/10
10000/10000 [==============================] - 6s - loss: 0.0455 - acc: 0.9841 - val_loss: 1.8394 - val_acc: 0.6730
Epoch 8/10
10000/10000 [==============================] - 6s - loss: 0.0559 - acc: 0.9797 - val_loss: 1.7764 - val_acc: 0.6650
Epoch 9/10
10000/10000 [==============================] - 6s - loss: 0.0488 - acc: 0.9835 - val_loss: 1.7711 - val_acc: 0.6620
Epoch 10/10
10000/10000 [==============================] - 6s - loss: 0.0502 - acc: 0.9834 - val_loss: 1.8225 - val_acc: 0.6700
```

Network with Softmax Activation in the Input Memory Representation at epoch=100
===============================================================================

```
Iteration 10
Train on 10000 samples, validate on 1000 samples
Epoch 1/10
10000/10000 [==============================] - 6s - loss: 0.0084 - acc: 0.9972 - val_loss: 0.2426 - val_acc: 0.9520
Epoch 2/10
10000/10000 [==============================] - 7s - loss: 0.0152 - acc: 0.9946 - val_loss: 0.2063 - val_acc: 0.9560
Epoch 3/10
10000/10000 [==============================] - 6s - loss: 0.0104 - acc: 0.9969 - val_loss: 0.2010 - val_acc: 0.9540
Epoch 4/10
10000/10000 [==============================] - 6s - loss: 0.0163 - acc: 0.9959 - val_loss: 0.2023 - val_acc: 0.9580
Epoch 5/10
10000/10000 [==============================] - 6s - loss: 0.0136 - acc: 0.9962 - val_loss: 0.2007 - val_acc: 0.9560
Epoch 6/10
10000/10000 [==============================] - 6s - loss: 0.0152 - acc: 0.9953 - val_loss: 0.1989 - val_acc: 0.9570
Epoch 7/10
10000/10000 [==============================] - 7s - loss: 0.0085 - acc: 0.9969 - val_loss: 0.2113 - val_acc: 0.9490
Epoch 8/10
10000/10000 [==============================] - 7s - loss: 0.0116 - acc: 0.9972 - val_loss: 0.2346 - val_acc: 0.9500
Epoch 9/10
10000/10000 [==============================] - 7s - loss: 0.0106 - acc: 0.9970 - val_loss: 0.2052 - val_acc: 0.9550
Epoch 10/10
10000/10000 [==============================] - 7s - loss: 0.0132 - acc: 0.9963 - val_loss: 0.2114 - val_acc: 0.9500
```
Abishek Bhat 2016-09-07 00:03:11 +05:30 committed by François Chollet
parent 0df0177437
commit b8fddc862e

@@ -173,6 +173,7 @@ match = Sequential()
 match.add(Merge([input_encoder_m, question_encoder],
                 mode='dot',
                 dot_axes=[2, 2]))
+match.add(Activation('softmax'))
 # output: (samples, story_maxlen, query_maxlen)
 # embed the input into a single vector with size = story_maxlen:
 input_encoder_c = Sequential()
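
For completeness, a self-contained sketch of the corrected match network in the Keras 1 API of the time; the layer arrangement follows the example, while the vocabulary size, sequence lengths, and embedding width are toy values of my own:

```python
from keras.models import Sequential
from keras.layers import Activation, Embedding, Merge

vocab_size, story_maxlen, query_maxlen, embed_dim = 22, 68, 4, 64  # toy values

# input memory representation: embed the story and the query (matrices A and B)
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size, output_dim=embed_dim,
                              input_length=story_maxlen))
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size, output_dim=embed_dim,
                               input_length=query_maxlen))

# match each memory against the query by inner product, then normalise:
# output shape (samples, story_maxlen, query_maxlen)
match = Sequential()
match.add(Merge([input_encoder_m, question_encoder],
                mode='dot', dot_axes=[2, 2]))
match.add(Activation('softmax'))  # the layer this commit adds
```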