In the IR community you will see image2image search implemented by passing an image through a headless CNN and then doing ANN search in the embedding space. That can lead into the discussion of cross-modal hashing you suggested. I think you also touched on LTR techniques with siamese networks; for example, we can use a siamese network to learn the difference between two embeddings.
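To make the image2image pattern concrete, here is a minimal sketch in numpy. Random vectors stand in for the headless-CNN outputs, and the search is exact cosine similarity; at scale you would swap the search step for an ANN library (e.g. FAISS):

```python
# Sketch of image2image search: assume we already have embeddings from
# a headless CNN; random vectors stand in for real image features.
import numpy as np

rng = np.random.default_rng(0)
index_embeddings = rng.normal(size=(1000, 128))  # catalog of indexed images
# A query that is a near-duplicate of catalog image 42.
query_embedding = index_embeddings[42] + 0.01 * rng.normal(size=128)

def nearest_neighbors(query, index, k=5):
    """Exact cosine-similarity search; replace with ANN at scale."""
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = index_norm @ query_norm
    return np.argsort(-scores)[:k]

print(nearest_neighbors(query_embedding, index_embeddings))
```

The top hit for the perturbed query is catalog image 42, as expected.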
In the RecSys community, for collaborative filtering you can take the interaction matrix between users and items and decompose it with SVD into two factors, which are called the user embeddings and the item embeddings. You also see people in the RecSys community doing item2vec (what I did) for the item embeddings and then aggregating those into the user representation (embedding).
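A toy version of the SVD factorization, assuming a small dense interaction matrix (real systems use sparse matrices and truncated or implicit-feedback variants):

```python
# Sketch of SVD-style factorization for collaborative filtering.
import numpy as np

interactions = np.array([  # rows = users, cols = items (e.g. click counts)
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

k = 2  # embedding dimension
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
user_embeddings = U[:, :k] * np.sqrt(s[:k])       # one row per user
item_embeddings = Vt[:k, :].T * np.sqrt(s[:k])    # one row per item

# Dot products of user and item embeddings approximate the interactions.
approx = user_embeddings @ item_embeddings.T
```

Splitting the singular values evenly (the `sqrt`) is just one convention; some implementations fold them entirely into one side.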
Maybe you will grant me the novelty of this method in making the connection between queries and converted restaurants a similarity metric and then applying it to the IR use case of search query expansion? Should the post be clearer that I am not claiming to have invented item2vec or ANN (approximate nearest neighbors)?
Your final paragraph is reasonable, and I do think the post would benefit from more clearly talking about this.
The “trick” is to find some naturally occurring property indicating that a pair of examples has the positive label: two users selecting the same product, two text queries leading to clicks on the same image, two different blog posts sharing a keyword, and so on. This has to be combined with an efficient way to sample acceptable negative pairs, ones that do not express the trait indicating a positive label.
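A minimal sketch of this pair-mining step, using a made-up click log in the spirit of the query/converted-restaurant example (the data and helper names are illustrative, not from the post):

```python
# Mine positive pairs from a naturally occurring signal: two queries
# that converted on the same item. Negatives are sampled at random and
# kept only if they never co-converted.
import random
from collections import defaultdict

click_log = [  # (query, converted_item) -- made-up data
    ("thai food", "restaurant_1"),
    ("pad thai", "restaurant_1"),
    ("pizza near me", "restaurant_2"),
    ("best pizza", "restaurant_2"),
    ("sushi", "restaurant_3"),
]

by_item = defaultdict(list)
for query, item in click_log:
    by_item[item].append(query)

# Positive pairs: queries that converted on the same item.
positives = [(a, b) for qs in by_item.values()
             for i, a in enumerate(qs) for b in qs[i + 1:]]

def sample_negative(rng):
    """Rejection-sample a query pair that is not a positive pair."""
    queries = [q for q, _ in click_log]
    while True:
        a, b = rng.sample(queries, 2)
        if (a, b) not in positives and (b, a) not in positives:
            return (a, b)
```

Rejection sampling is fine here because positives are sparse; with dense positives you would need a smarter negative sampler.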
That, plus deciding how to approach the loss function (e.g. cosine similarity, euclidean distance, distance in a hash space, triplet loss, whether it should have a margin), is the whole trick of the problem.
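Two of those loss choices sketched in plain numpy; in practice these sit at the head of a siamese network inside a deep learning framework:

```python
# Cosine similarity on a pair, and triplet loss with a margin.
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on euclidean distances: pull the positive in, push the negative out."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # a close positive
n = np.array([-1.0, 0.0])  # a far negative
```

With these toy vectors the triplet is already satisfied (loss 0); swapping the positive and negative produces a nonzero loss, which is what the margin hinge is for.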
The black box that produces embeddings (a DNN, doc2vec, a sparse vector of tf-idf-like features, whatever) and the nearest-neighbor piece are standard plug and play.
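To illustrate the plug-and-play point, here is a sketch that wires one such black box (tf-idf) into off-the-shelf nearest-neighbor search, assuming scikit-learn is available; the documents are made up:

```python
# Any embedding box feeds the same nearest-neighbor machinery.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

docs = ["thai noodles", "pad thai noodles", "deep dish pizza", "sushi rolls"]
X = TfidfVectorizer().fit_transform(docs)  # sparse tf-idf embeddings

nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
_, idx = nn.kneighbors(X[0])               # neighbors of "thai noodles"
print([docs[i] for i in idx[0]])
```

Swapping `TfidfVectorizer` for any other embedding producer leaves the search half untouched, which is the sense in which both pieces are commodity.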