Our Science

We aim for rapid, reproducible analysis of complex text data that is neither reductionist nor biased.

Leximancer is text-mining software for exploring the content of collections of text documents and visually displaying the extracted information. The display is a conceptual map that provides an overview of the material, representing the main concepts contained within the text and how they are related.

Leximancer uses deep learning to extract a transparent three-level network model of meaning from the data. By default, Leximancer is fully data-driven, or unsupervised, but the deep semantic model can be directed by the user to target key topics and relationships. This offers the combined benefits of deep learning, transparency, and user control, without the need for laborious human tagging of the data or dictionary building. Read more
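To make this concrete, here is a minimal, hypothetical Python sketch of the general idea behind data-driven concept mapping: frequent terms become candidate concepts, sentence-level co-occurrence links them, and optional user-supplied seed terms can steer the result. It illustrates unsupervised concept extraction in general and is not Leximancer's actual algorithm; the corpus file name, tokenisation, and thresholds are assumptions for illustration only.

import re
from collections import Counter
from itertools import combinations

def concept_network(text, n_concepts=10, seeds=()):
    # Rough sentence and token splits (illustrative, not Leximancer's tokeniser).
    sentences = re.split(r"[.!?]+", text.lower())
    tokenised = [re.findall(r"[a-z]{4,}", s) for s in sentences]

    # Data-driven candidate concepts: the most frequent terms, plus any user seeds.
    counts = Counter(w for sent in tokenised for w in sent)
    concepts = set(seeds) | {w for w, _ in counts.most_common(n_concepts)}

    # Relationships: how often two concepts occur in the same sentence.
    links = Counter()
    for sent in tokenised:
        present = sorted(concepts & set(sent))
        links.update(combinations(present, 2))
    return concepts, links

# "documents.txt" is a placeholder corpus; "climate" is an example seed concept.
text = open("documents.txt", encoding="utf-8").read()
concepts, links = concept_network(text, seeds={"climate"})
for (a, b), weight in links.most_common(5):
    print(f"{a} -- {b}: {weight}")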

You can also read an excellent review of automated content analysis software here.


Leximancer Validation Article

Smith, A. E., & Humphreys, M. S. (2006). Evaluation of unsupervised semantic mapping of natural language with Leximancer concept mapping. Behavior Research Methods, 38(2), 262-279. PDF

Articles about Text Analytics

  • Seven Questions to ask your Text Analytics Vendor, Read

  • Rapidly Acquiring Reliable Expert Knowledge with Text Analytics, Read

  • Real-world natural language is structured as a small-world network (see the sketch after this list).

    Senekal, B. A., & Geldenhuys, C. (2016). Afrikaans as a complex network: The word co-occurrence network in André P. Brink’s Donkermaan in Afrikaans, Dutch and English. Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie/South African Journal of Science and Technology, 35(1), 9 pages. Read
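As a concrete illustration of that small-world claim, the following hedged sketch builds a word co-occurrence network from a plain-text corpus with the networkx library and reports the two standard small-world indicators: clustering and average path length. The corpus file name, window size, and frequency threshold are assumptions for illustration, and this is not the procedure used in the cited paper.

from collections import Counter
import networkx as nx

def cooccurrence_graph(tokens, window=2, min_count=2):
    # Link words that appear within `window` positions of each other.
    pairs = Counter()
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + 1 + window]:
            if word != other:
                pairs[tuple(sorted((word, other)))] += 1
    g = nx.Graph()
    g.add_edges_from(pair for pair, count in pairs.items() if count >= min_count)
    return g

# "corpus.txt" is a placeholder for any plain-text corpus.
tokens = open("corpus.txt", encoding="utf-8").read().lower().split()
g = cooccurrence_graph(tokens)
giant = g.subgraph(max(nx.connected_components(g), key=len))

# Small-world signature: high clustering combined with a short average path length.
print("nodes:", giant.number_of_nodes())
print("average clustering:", nx.average_clustering(giant))
print("average path length:", nx.average_shortest_path_length(giant))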

Leximancer Case Study

“Conciliatory Modeling of Web Sources Using Automated Content Analysis: A Vignette on Drilling for Shale Gas in Upstate New York.”

This case study by Mike Coombs shows how Leximancer's profiling can be used to facilitate open debate on sensitive topics. Read

Mature automatic content analysis methodology in the literature

In recent years, several researchers have established a rigorous procedure for the characterisation of content. The papers below are excellent examples, and we strongly recommend them as a blueprint for performing grounded ‘sense-making’ of text data.

  • Lin, X., Zhang, H., Wu, H., & Cui, D. (2020). Mapping the knowledge development and frontier areas of public risk governance research. International Journal of Disaster Risk Reduction, 43, 101365. Article

  • Nunez‐Mir, G. C., Iannone, B. V., Pijanowski, B. C., Kong, N., & Fei, S. (2016). Automated content analysis: addressing the big literature challenge in ecology and evolution. Methods in Ecology and Evolution, 7(11), 1262-1272. doi: 10.1111/2041-210X.12602 Article

  • Cheng, M., & Edwards, D. (2017). A comparative automated content analysis approach on the review of the sharing economy discourse in tourism and hospitality. Current Issues in Tourism. Advance online publication. doi: 10.1080/13683500.2017.1361908 Article

  • Randhawa, K., Wilden, R., & Hohberger, J. (2016). A bibliometric review of open innovation: Setting a research agenda. Journal of Product Innovation Management, 33(6), 750-772. doi: 10.1111/jpim.12312 Article

Find more recent publications here.