At the turn of the second decade of the 21st century, humanity is on the verge of a new scientific and technological revolution. Just as computers automated data processing 50 years ago, in the 1960s, raising productivity by orders of magnitude and replacing human labor at various stages of production, the automation of knowledge processing will free entire industries from the manual handling of unstructured information. The software system under development extracts knowledge from natural-language text (English, Russian, etc.) and converts it into a form suitable for use by a computer. Knowledge described in such a formal way is called an ontology. The system thus builds an ontology of a text document that represents its semantics. Building document ontologies solves the task of knowledge systematization (ordering). Automating the systematization of knowledge extracted from texts can significantly increase labor productivity in companies that handle large volumes of documents. Even simple classification of text documents, i.e. assigning a text to one of a list of categories (accounting, marketing, internal documents, etc.), can significantly speed up processing by routing each document to the appropriate department. Many text systematization tools already exist: classification; summarization (building a short summary of a document); citation analysis (identifying citations of a document in a given set of other documents); authorship attribution; and sentiment analysis. However, the quality of the currently available tools does not allow shifting the entire burden of processing natural language and other semi-structured data onto the computer.
Such data must be processed at a qualitatively different level: the level of meaning, operating directly on the knowledge extracted from the data. The corresponding area of computer science is called semantic analysis. Semantic text analysis allows documents to be handled more effectively and thus further automates their processing. A computer representation of the semantics of a document is referred to as an ontology. One of the most popular ways to represent an ontology in memory is as a logical theory, i.e. a finite set of objects and relations that express the semantics of the document's content. The mathematical apparatus of description logic is used to represent an ontology as a logical theory, and the standard language for defining ontologies is OWL (built on RDF). The project of analyzing texts for logical contradictions solves the problem of verifying knowledge extracted from natural language, i.e. it checks whether a document contains contradictory information. Verification of a document's content is based on inference algorithms that operate on the ontology representing it as a logical theory.
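As a minimal sketch of the idea (not the company's actual implementation), contradiction detection over extracted knowledge can be illustrated in plain Python: class-membership facts extracted from a document are checked against disjointness axioms of the ontology. All class names and facts below are hypothetical; a real system would encode them in OWL (RDF) and use a description-logic reasoner rather than this toy check.

```python
# Toy "ontology as a logical theory": class-membership facts plus axioms.
# Facts are what a text-analysis pipeline might extract from a document.
memberships = {
    "contract_17": {"SignedDocument", "DraftDocument"},  # hypothetical facts
    "invoice_3": {"SignedDocument"},
}

# Disjointness axioms: no individual may belong to both classes at once
# (in OWL this would be stated with owl:disjointWith).
disjoint_classes = [("SignedDocument", "DraftDocument")]

def find_contradictions(memberships, disjoint_classes):
    """Return (individual, class_a, class_b) triples violating disjointness."""
    violations = []
    for individual, classes in memberships.items():
        for a, b in disjoint_classes:
            if a in classes and b in classes:
                violations.append((individual, a, b))
    return violations

print(find_contradictions(memberships, disjoint_classes))
# → [('contract_17', 'SignedDocument', 'DraftDocument')]
```

A full reasoner would first close the fact set under subclass and property axioms before checking consistency; the sketch only shows the final verification step.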


Consumers usually do not distinguish the enterprise search and text analysis markets, so precise figures for the text analysis market are not yet available; we therefore estimate it from the enterprise search market. According to Gartner, the enterprise search market in the U.S. (the largest player in this field) is about $1.2 billion. The text analysis market can be estimated at approximately 40% of the total enterprise search market, i.e. about $0.5 billion in the U.S.

In the Russian Federation this market is not yet developed; its volume is about 20 million U.S. dollars. The main tasks consumers need solved are the following:

  • Search for documents.
  • Monitoring tools (tracking mentions of persons and other objects).
  • Extracting knowledge.
  • Determination of authorship.
  • Comparison of the documents.
  • Sentiment analysis.
  • Summarization (annotation).
Over the last 3-5 years there has been a trend toward tools that define ontologies as formal logical theories specifying the semantics of the data being processed. A similar trend is under way in Russia: many companies (including Skolkovo residents) use ontology-based descriptions of semantics in their products.


The company was established in 2012 as a start-up to implement semantic analysis of natural language within the Skolkovo project. The research emphasis is on the logical analysis of knowledge extracted from texts belonging to a given domain, whose conceptual framework is described in the form of an ontology, usually in the OWL (RDF) language. The flagship product is expected to be a system for analyzing natural-language texts for logical contradictions.

Investment opportunities


The company is ready to cooperate with investors and is, in fact, actively seeking such cooperation.

The company encourages investment at the first (seed) stage of the project, during which it proposes to develop a set of basic tools and technologies for building semantic representations of text documents in ontology languages such as OWL (RDF) and for performing logical analysis of the resulting ontologies.

Development of the basic technologies is expected to take 1.5 years with a team of 5 developers, a linguist, and a director. This phase is expected to cost about 25 million rubles.

The next step (stage 2) is expected to focus on specific products, including introducing the company's semantic analysis tools at a variety of companies. This requires developing formal representations of domains (ontologies) that describe the production processes of each company adopting the product.

Stage 2 will take 2.5 years, during which the company expects to build a customer base sufficient to reach payback.



Team [2]