• ConvNets need no knowledge of the syntactic or semantic structure of a language to achieve strong results on text-understanding benchmarks. This evidence contrasts with many previous approaches, where a dictionary of words is a necessary starting point and structured parsing is often hard-wired into the model.
  • ConvNets are applied to various large-scale datasets, including ontology classification, sentiment analysis, and text categorization. The research shows that temporal ConvNets can achieve astonishing performance without knowledge of words, phrases, sentences, or any other syntactic or semantic structure of a human language. Evidence shows the models work for both English and Chinese.
  • It is also worth noting that natural language is, in essence, a time series in disguise. Therefore, one natural extended application of the approach is to time-series data, where a hierarchical feature-extraction mechanism could bring improvements over the recurrent and regression models widely used today.
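
The "no knowledge of words" idea above rests on feeding the network raw characters: each character is quantized as a one-hot vector over a fixed alphabet, and a temporal (1-D) convolution slides over the resulting sequence. The sketch below illustrates that input encoding and a single convolution step in plain Python; the alphabet, frame length, and kernel are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical alphabet for illustration; the character set and frame
# length are assumptions, not the configuration used in the research.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text, frame_len=16):
    """Encode `text` as a frame_len x len(ALPHABET) one-hot matrix.
    Characters outside the alphabet (and trailing padding) become
    all-zero rows, so no word dictionary is ever consulted."""
    frame = [[0.0] * len(ALPHABET) for _ in range(frame_len)]
    for pos, ch in enumerate(text.lower()[:frame_len]):
        idx = CHAR_INDEX.get(ch)
        if idx is not None:
            frame[pos][idx] = 1.0
    return frame

def temporal_conv(frame, kernel):
    """Temporal (1-D) convolution: slide a k x len(ALPHABET) kernel
    along the character axis, emitting one dot product per window."""
    k = len(kernel)
    out = []
    for start in range(len(frame) - k + 1):
        total = 0.0
        for i in range(k):
            for j in range(len(ALPHABET)):
                total += frame[start + i][j] * kernel[i][j]
        out.append(total)
    return out

frame = quantize("hello world")
# Toy all-ones kernel: each output counts the in-alphabet characters
# inside its 3-character window.
kernel = [[1.0] * len(ALPHABET) for _ in range(3)]
features = temporal_conv(frame, kernel)
print(features[0])  # window "hel" has three in-alphabet chars -> 3.0
```

A real model would stack several such convolutions with learned kernels, pooling, and fully connected layers, but the key point survives in the sketch: the input pipeline touches only characters, never words or parses.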
