User:The Transhumanist/Sandbox002

From Wikipedia, the free encyclopedia

Other software

  • AQUA
  • BORIS
  • CTAKES – open-source natural language processing system for extracting information from the clinical free text of electronic medical records. It processes clinical notes, identifying clinical named entities of several types — drugs, diseases/disorders, signs/symptoms, anatomical sites and procedures. Each named entity carries attributes for its text span, its ontology mapping code, its context (e.g. current, family history of, unrelated to patient), and whether it is negated. Also known as Apache cTAKES.
  • DMAP
  • ETAP-3 – proprietary linguistic processing system focusing on English and Russian.[1] It is a rule-based system which uses the Meaning-Text Theory as its theoretical foundation.
  • FASTUS – cascaded finite-state transducer for extracting information from natural-language text.
  • FRUMP
  • Iris – personal assistant application for Android. The application uses natural language processing to answer questions posed as spoken user requests.
  • IPP[disambiguation needed]
  • JAPE – the Java Annotation Patterns Engine, a component of the open-source General Architecture for Text Engineering (GATE) platform. JAPE is a finite state transducer that operates over annotations based on regular expressions.
  • Learning Reader – Forbus et al. 2007
  • LOLITA – "Large-scale, Object-based, Linguistic Interactor, Translator and Analyzer". LOLITA was developed by Roberto Garigliano and colleagues between 1986 and 2000. It was designed as a general-purpose tool for processing unrestricted text that could be the basis of a wide variety of applications. At its core was a semantic network containing some 90,000 interlinked concepts.
  • Maluuba – intelligent personal assistant for Android devices that uses a contextual approach to search, taking into account the user's geographic location, contacts, and language.
  • METAL MT – machine translation system developed in the 1980s at the University of Texas and at Siemens which ran on Lisp Machines.
  • Never-Ending Language Learning – semantic machine learning system developed by a research team at Carnegie Mellon University, supported by grants from DARPA, Google, and the NSF, with portions of the system running on a supercomputing cluster provided by Yahoo!.[2] NELL was programmed to identify a basic set of semantic relationships between a few hundred predefined categories of data, such as cities, companies, emotions, and sports teams. Since the beginning of 2010, the Carnegie Mellon team has run NELL around the clock, sifting through hundreds of millions of web pages for connections between what it already knows and what it finds through its search process, so that it makes new connections in a manner intended to mimic the way humans learn new information.[3]
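The attribute set cTAKES attaches to each mention, as described in its entry above, can be pictured as a simple record. The class below is an illustrative Python sketch only, not the actual cTAKES type system (cTAKES is a Java/UIMA application); the field names and the ontology code are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the attributes cTAKES assigns to a clinical
# named entity; this is NOT the real cTAKES (Java/UIMA) type system.
@dataclass
class ClinicalEntity:
    text: str            # surface text of the mention
    span: tuple          # (start, end) character offsets within the note
    entity_type: str     # drug, disease/disorder, sign/symptom, anatomical site, procedure
    ontology_code: str   # mapping code into a clinical ontology (illustrative value below)
    context: str         # e.g. "current", "family history of", "unrelated to patient"
    negated: bool        # whether the mention is negated ("denies chest pain")

# Example: the phrase "denies chest pain" yields a negated sign/symptom mention.
mention = ClinicalEntity(
    text="chest pain",
    span=(8, 18),
    entity_type="sign/symptom",
    ontology_code="C0008031",  # illustrative code only
    context="current",
    negated=True,
)
print(mention.entity_type, mention.negated)
```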
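The cascade idea behind FASTUS, in which each finite-state stage rewrites the output of the previous one, can be sketched with two regex stages. This is a toy illustration, not the actual SRI system; the tags, vocabulary, and patterns are invented for the example.

```python
import re

# Toy two-stage cascade in the spirit of FASTUS: stage 1 marks basic phrases,
# stage 2 matches a pattern over the marked output to extract an event.
def stage1_mark_phrases(text):
    # Mark company names and verbs of interest with inline tags (invented vocabulary).
    text = re.sub(r"\b(Acme Corp|Widget Inc)\b", r"<CO>\1</CO>", text)
    text = re.sub(r"\b(acquired|bought)\b", r"<V>\1</V>", text)
    return text

def stage2_extract_events(marked):
    # Match <CO>X</CO> <V>...</V> <CO>Y</CO> sequences produced by stage 1.
    pattern = r"<CO>(.*?)</CO>\s+<V>(?:acquired|bought)</V>\s+<CO>(.*?)</CO>"
    return [{"acquirer": a, "acquired": b} for a, b in re.findall(pattern, marked)]

events = stage2_extract_events(stage1_mark_phrases("Acme Corp acquired Widget Inc."))
print(events)  # [{'acquirer': 'Acme Corp', 'acquired': 'Widget Inc'}]
```

Because each stage only reads the previous stage's output, new extraction rules can be added without revisiting earlier stages, which is the practical appeal of the cascade design.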
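The key idea of JAPE, pattern matching over annotations rather than raw characters, can be approximated in a few lines. This is a simplified Python sketch, not real JAPE grammar syntax (actual JAPE rules are written in Phase/Rule files and run inside GATE); the annotation types here are invented for the example.

```python
# Simplified illustration of JAPE-style matching over annotations.
def match_rule(annotations, pattern):
    """Find runs of annotations whose types match `pattern` in order."""
    results = []
    n, m = len(annotations), len(pattern)
    for i in range(n - m + 1):
        window = annotations[i:i + m]
        if all(a["type"] == t for a, t in zip(window, pattern)):
            results.append(window)
    return results

# Annotations as a tokeniser/gazetteer stage might produce them.
anns = [
    {"type": "Title", "text": "Dr."},
    {"type": "FirstName", "text": "Ada"},
    {"type": "Surname", "text": "Lovelace"},
]
# A rule in the spirit of `{Title} {FirstName} {Surname} --> Person`.
for match in match_rule(anns, ["Title", "FirstName", "Surname"]):
    print(" ".join(a["text"] for a in match))  # Dr. Ada Lovelace
```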
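A semantic network of the kind at LOLITA's core can be sketched as a labelled directed graph. The snippet below is a minimal illustration only; LOLITA's actual network held some 90,000 interlinked concepts and far richer structure, and the concepts and relations shown are invented for the example.

```python
from collections import defaultdict

# Minimal semantic network: concepts linked by named relations.
class SemanticNetwork:
    def __init__(self):
        self.edges = defaultdict(list)  # concept -> [(relation, concept), ...]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def related(self, concept, relation):
        # All concepts reachable from `concept` via `relation`.
        return [o for r, o in self.edges[concept] if r == relation]

net = SemanticNetwork()
net.add("dog", "is_a", "animal")
net.add("dog", "has_part", "tail")
net.add("animal", "is_a", "living_thing")
print(net.related("dog", "is_a"))  # ['animal']
```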
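NELL's learning loop, in which seed instances of a category suggest extraction patterns and those patterns then propose new instances, can be caricatured in a few lines. This toy sketch is not the actual CMU system (NELL couples many learners with confidence estimates and human spot-checks); the corpus and the single-word context patterns are invented for the example.

```python
# Toy bootstrapping loop in the spirit of NELL's pattern learning.
corpus = [
    "we visited Paris last summer",
    "we visited Tokyo last spring",
    "they flew to Tokyo on Monday",
    "they flew to Berlin on Tuesday",
]

def learn(seeds, sentences, rounds=2):
    known = set(seeds)
    tokenised = [s.split() for s in sentences]
    for _ in range(rounds):
        # Step 1: learn (previous word, next word) contexts around known instances.
        contexts = set()
        for words in tokenised:
            for i in range(1, len(words) - 1):
                if words[i] in known:
                    contexts.add((words[i - 1], words[i + 1]))
        # Step 2: any word appearing in a learned context becomes a new instance.
        for words in tokenised:
            for i in range(1, len(words) - 1):
                if (words[i - 1], words[i + 1]) in contexts:
                    known.add(words[i])
    return known

# Round 1 generalises "Paris" to the context (visited, last) and finds Tokyo;
# round 2 learns Tokyo's context (to, on) and finds Berlin.
print(sorted(learn({"Paris"}, corpus)))  # ['Berlin', 'Paris', 'Tokyo']
```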
  1. ^ "Многоцелевой лингвистический процессор ЭТАП-3" [Multipurpose linguistic processor ETAP-3]. Iitp.ru. Retrieved 2012-02-14.
  2. ^ "Aiming to Learn as We Do, a Machine Teaches Itself". New York Times. October 4, 2010. Retrieved 2010-10-05. Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human.
  3. ^ "Project Overview". Carnegie Mellon University. Retrieved October 5, 2010.