
Three failed attempts at handling non-sequential data

Posted on 2017-07-04 16:33 by MaHaLo

The Progress of Product Classification

We are now considering classifying products by two kinds of features: product images and product titles. I tried to handle the two kinds of features individually. On the title side, I used Keras to build a simple RNN model that classifies products into 10 classes, and I got a good result, about 98% accuracy. I tested the model with some products from our site; unless the title is too ambiguous, I get a proper result, but the model doesn't know how to handle some compound words, e.g. 'SmartWatch'. The product images, however, are very clear, so I suspect that if I could combine these two features this wouldn't be a big problem. You can see the watch at , and my model recognized it as a motherboard. (sad)
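A minimal sketch of the kind of title classifier described above, assuming the titles have already been tokenized and padded; the vocabulary size, sequence length, and layer sizes are my own placeholders, not the exact values I used:

```python
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 30         # assumed maximum number of tokens per title
NUM_CLASSES = 10     # 10 product classes, as in the text

model = Sequential([
    Embedding(VOCAB_SIZE, 128, input_length=MAX_LEN),  # token ids -> dense vectors
    SimpleRNN(64),                                     # read the title left to right
    Dense(NUM_CLASSES, activation='softmax'),          # class probabilities
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(padded_title_tokens, one_hot_labels, ...)
```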

On the image side, I want to build a model to classify the product images. Unlike the usual image classification problem, I'm going to make a classifier that works on a set of images. For example, a Lenovo laptop product might contain an image of the Lenovo logo plus photographs of the laptop's front and back, and the images can come in any order. So I'm really working with a set of non-sequential data.

Three failed attempts

1. Working on single images and combining the results

I trained a usual classifier that accepts a single image, writing the model with Keras VGG16 as before. Suppose we have 3 images: I pass each image to the model and get a probability distribution over all classes. Assuming we have 4 classes, for each image I get a probability vector like [0.1, 0.8, 0.05, 0.05]. Then I use a weighted average to merge all the probabilities, and here I hit a problem. If one of the 3 images is ambiguous and ranks the right class low, say the first class is the right one and its vector is [0.1, 0.4, 0.3, 0.3], while the other two images rank the first class highly, [0.98, 0.0001, 0.003, 0.016], then for a human it's very clear this product belongs to the first class, but after the weighted average the merged probabilities might look like [0.68, 0.1, 0.05, 0.03].
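Here is what that merging step looks like with the numbers from the example above (taken verbatim from the text, so they are only illustrative and the first vector doesn't sum exactly to 1); equal weights are assumed as the simplest case:

```python
import numpy as np

# Per-image class probabilities from the single-image classifier (3 images, 4 classes).
ambiguous = np.array([0.10, 0.40, 0.30, 0.30])      # the ambiguous image
confident = np.array([0.98, 0.0001, 0.003, 0.016])  # the two confident images

probs = np.stack([ambiguous, confident, confident])  # shape (3, 4)
weights = np.full(3, 1.0 / 3.0)                      # equal weights, the simplest case

merged = weights @ probs                             # weighted average over the images
print(merged)           # roughly [0.69, 0.13, 0.10, 0.11]
print(merged.argmax())  # 0 -- the right class still wins, but its score is
                        # pulled down by the single ambiguous image
```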

I also tried to build a simple RNN model that accepts all the probability vectors, and it didn't work either.
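A rough sketch of that idea, feeding the per-image probability vectors to an RNN as if they were a sequence; the maximum number of images and the layer size are my own assumptions:

```python
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

MAX_IMAGES = 5   # assumed maximum number of images per product
NUM_CLASSES = 4  # as in the example above

model = Sequential([
    # input: MAX_IMAGES per-image probability vectors for one product
    SimpleRNN(32, input_shape=(MAX_IMAGES, NUM_CLASSES)),
    Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```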

2. Combining all images into a single data block

Most product images are RGB images. From a mathematical point of view, an RGB image is a 3rd-order tensor with shape (3, width, height), and each element of the tensor is an integer from 0 to 255.

First, I convert every image to grayscale, so each image's shape is (width, height), i.e. a matrix. I limit the maximum number of images to N; if a product has fewer than N images, I pad with blank images, matrices with all elements set to zero. Second, I stack these images along a new axis, which gives a tensor of shape (N, width, height). Finally, I build a model that accepts this tensor. But I failed: I get a different result when I reorder the images.
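A minimal sketch of this preprocessing plus a model that treats the N stacked images as N input channels; N, the image size, and the class count are assumed values, not the ones I actually used:

```python
import numpy as np
from PIL import Image
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

N, WIDTH, HEIGHT = 5, 128, 128   # assumed max image count and image size

def images_to_block(paths):
    """Stack one product's images into a single (N, HEIGHT, WIDTH) grayscale block."""
    block = np.zeros((N, HEIGHT, WIDTH), dtype=np.float32)  # blank images are all zeros
    for i, path in enumerate(paths[:N]):
        img = Image.open(path).convert('L').resize((WIDTH, HEIGHT))  # grayscale + resize
        block[i] = np.asarray(img, dtype=np.float32) / 255.0
    return block

# Treat the N stacked grayscale images as N input channels.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu',
           input_shape=(N, HEIGHT, WIDTH), data_format='channels_first'),
    MaxPooling2D((2, 2), data_format='channels_first'),
    Flatten(),
    Dense(4, activation='softmax'),   # assumed 4 classes, as in the earlier example
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```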

I think the reason I failed is that after the convolution and pooling layers I get a 3rd-order tensor, which I need to reshape into a vector before passing it to the final classifier; that's the job of the Keras Flatten layer, and what follows is essentially a weighted-average job. When I change the order of the images, I get a different vector in front of the classifier.
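A tiny NumPy illustration of why this breaks: flattening a stack of feature maps in a different order produces a different vector, so the dense classifier after Flatten sees a different input.

```python
import numpy as np

# Two tiny feature maps standing in for what comes out of the conv/pooling stack.
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(np.stack([a, b]).flatten())   # [1 2 3 4 5 6 7 8]
print(np.stack([b, a]).flatten())   # [5 6 7 8 1 2 3 4] -- a different vector
```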

3. Adding an attention mechanism to the model

As I mentioned above, the weighted average caused the problem, so I wanted to do something to avoid the weighted average before the Flatten layer. The attention mechanism is a newer technique often used with RNNs; it lets the model learn which part of the input is more important and pay attention to that part. I followed keras-attention-mechanism to add an attention mechanism to my model. But I failed just like before.

The attention mechanism can't guarantee that the classifier receives the same tensor when the images come in a different order.
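A rough sketch in the spirit of keras-attention-mechanism, re-weighting the N per-image feature vectors with learned attention scores before flattening; the sizes, layer arrangement, and class count here are my own assumptions, not the exact code I used:

```python
from keras.models import Model
from keras.layers import Input, Dense, Permute, Multiply, Flatten

N, FEAT = 5, 256   # assumed: N images per product, FEAT features extracted per image

inputs = Input(shape=(N, FEAT))            # one feature vector per image
a = Permute((2, 1))(inputs)                # (FEAT, N)
a = Dense(N, activation='softmax')(a)      # learn a weight for each of the N images
a = Permute((2, 1))(a)                     # back to (N, FEAT)
weighted = Multiply()([inputs, a])         # re-weight the per-image features
outputs = Dense(4, activation='softmax')(Flatten()(weighted))

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```

The Flatten at the end is still order-sensitive, which is exactly the failure described above.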

Some thoughts

As this paper mentioned, I think that to deal with non-sequential data we need to use some statistical features.
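One possible reading of "statistical features", sketched here only as an assumption: pool an order-invariant statistic (such as the mean) over the per-image feature vectors before the classifier, so reordering the images cannot change the classifier's input. All sizes below are placeholders:

```python
from keras.models import Model
from keras.layers import Input, TimeDistributed, Dense, GlobalAveragePooling1D

N, FEAT = 5, 256   # assumed sizes: N images per product, FEAT features per image

inputs = Input(shape=(N, FEAT))                               # per-image feature vectors
h = TimeDistributed(Dense(128, activation='relu'))(inputs)    # transform each image the same way
pooled = GlobalAveragePooling1D()(h)                          # mean over the image axis: order-invariant
outputs = Dense(4, activation='softmax')(pooled)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```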