This is the supplementary material for the article "How Meaningful Are Similarities in Deep Trajectory Representations?".
Finding similar trajectories is a fundamental task in moving object databases. However, classical similarity models suffer from several issues when finding similar trajectories, such as poor scalability and lack of robustness. Recently, a new paradigm has been proposed which transforms trajectories into a high-dimensional vector space with the property that if two trajectories are similar, their corresponding vectors in this new space are close to each other. The model is called t2vec (trajectory to vector). Although t2vec theoretically overcomes the aforementioned weaknesses of the classical models, there exists no systematic evaluation of its learned vectors, their quality, or the semantics of their similarity values. In this paper, our contribution is two-fold. First, we test the robustness of t2vec by evaluating how different parameters affect the learned vectors and their similarity values. To this end, we compute the similarity value distributions of t2vec models trained with systematically varied parameter settings. Based on these results, we draw conclusions on the semantics of t2vec's similarity values as well as on its robustness. Second, we evaluate the quality of the learned vectors by comparing the embedding model with a state-of-the-art classical model. We show that t2vec is orders of magnitude more scalable than classical models, while being semantically closer to the human perception of trajectory similarity and qualitatively better at clustering trajectories. Finally, we give public access to all the trained t2vec models used in this paper, forming one of the largest collections of its kind.
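To illustrate the paradigm described above, the following minimal sketch shows how trajectory similarity search reduces to nearest-neighbour queries once trajectories are embedded as vectors. The vectors here are synthetic stand-ins, not the released t2vec models, and the function name is our own illustration:

```python
import numpy as np

# Synthetic stand-ins for learned trajectory embeddings:
# 1000 trajectories, each represented as a 256-dimensional vector.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 256))

def most_similar(query_idx, k=5):
    """Return the indices of the k trajectories whose embedding
    vectors are closest (Euclidean distance) to the query's."""
    dists = np.linalg.norm(embeddings - embeddings[query_idx], axis=1)
    dists[query_idx] = np.inf  # exclude the query trajectory itself
    return np.argsort(dists)[:k]

neighbours = most_similar(0)
```

With real t2vec embeddings, the only change would be loading the learned vectors in place of the random matrix; the search itself is identical.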
Trajectory Embedding Models
Here we provide all the trajectory embedding models we have trained for the publication, grouped in the same way as in the paper.
- Training set size parameter evaluation: [Trainsetsize.7z]
- Dimensionality of the embedding space parameter evaluation: [Embeddingspace.7z]
- Dimensionality of cell representation parameter evaluation: [Cellrepresentation.7z]
- Grid cell size parameter evaluation: [Cellsize.7z]
- Loss function parameter evaluation: [Lossfunc.7z]
We provide the scripts of our experiments here: [Scripts.7z]. The archive contains the following scripts:
- Training set size experiments
- Dimensionality of the embedding space experiments
- Dimensionality of cell representation experiments
- Grid cell size experiments
- Loss function experiments