Advances in Machine Learning
157,40 €
MP-ARC Arcler Press
Page count: 404 pages
Format: Hardcover
Publication year: 2018, 30.01.2018
Language: English
Recently, a new field of computer science has emerged, comprising methods and techniques for solving problems that cannot easily be described by traditional algorithms. This field, called "cognitive computing" or "real-world computing", draws on a varied set of methodologies, such as fuzzy logic, approximate reasoning, genetic algorithms, chaos theory, and artificial neural networks (ANNs). The objective of the present work is to introduce the latter: definitions, principles, and typology, as well as concrete applications in the field of information retrieval.

During the past decade, the field of information retrieval experimented with artificial intelligence (AI) techniques based on rules and knowledge. These techniques proved to have many limitations and were difficult to apply, so in the present decade work has begun with more recent AI techniques based on inductive learning: symbolic learning, genetic algorithms, and neural networks (Chen, 1995).

The earliest work in neural computing dates back to the early 1940s, when neurophysiologist Warren McCulloch and mathematician Walter Pitts, drawing on their studies of the nervous system, proposed a formal neuron model implemented with electrical circuits (McCulloch, 1943). The enthusiasm aroused by this neuronal model drove research along this line during the 1950s and 1960s. In 1957 Frank Rosenblatt developed the Perceptron, a network model with generalization capability that is still used today in various applications, generally in pattern recognition. In 1959 Bernard Widrow and Marcian Hoff of Stanford University developed ADALINE (ADAptive LINear Elements), the first ANN applied to a real problem (filtering noise on telephone lines).
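To make the Perceptron mentioned above concrete, the following is a minimal sketch of Rosenblatt's learning rule on a toy, linearly separable problem (logical AND). The dataset, learning rate, and epoch count are illustrative assumptions, not taken from the book.

# Minimal sketch of the perceptron learning rule (illustrative, not from the book).
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights w and bias b so that a hard threshold on w.x + b matches the labels."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with a step activation, then update only on mistakes.
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else 0
            error = y - predicted
            if error != 0:
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

# Logical AND: output 1 only when both inputs are 1.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
weights, bias = train_perceptron(X, y)
print(weights, bias)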

In 1969 Marvin Minsky and Seymour Papert of MIT published a work attacking the neural model and arguing that any research along these lines was sterile (Minsky, 1969). Owing to this criticism, work on ANNs stalled until it received new impetus during the 1980s. Despite this pause, several researchers continued to work in that direction during the 1970s, among them the American James Anderson, who developed the BSB (Brain-State-in-a-Box) model, and the Finn Teuvo Kohonen, who did the same with a model based on self-organizing maps.

From 1982 onward, interest in neural computation began to gather strength again. Progress in hardware and software, methodological advances around learning algorithms for ANNs, and new artificial intelligence techniques favoured this rebirth. That same year, the first joint conference between neural computing researchers from the US and Japan was held. In 1985 the American Institute of Physics established the annual meeting Neural Networks for Computing. In 1987 the IEEE held its first conference on ANNs, and that same year the International Neural Network Society (INNS) was created.

A machine learning system that identifies expressions of negation and speculation in biomedical texts is presented, specifically in the BioScope document collection. The objective of the work is to compare the effectiveness of this machine-learning-based approach with one based on regular expressions. Among the systems that follow the latter approach, NegEx was used because of its availability and popularity. The evaluation was carried out on the three subcollections that form BioScope: clinical documents, scientific articles, and abstracts of scientific articles. The results show the superiority of the machine learning approach over the use of regular expressions. In identifying negation expressions, the system improves on the F1 measure of NegEx by between 20 and 30%, depending on the document collection. In identifying speculation, the proposed system exceeds the F1 measure of the best baseline algorithm by between 10 and 20%.
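Since the comparison above is reported in terms of the F1 measure, the following is a minimal sketch of how that score is computed from true and false positives and negatives. The toy negation-cue spans and the two hypothetical systems are illustrative assumptions, not data from BioScope or the work described.

# Minimal sketch of the F1 measure over two sets of predicted items (illustrative only).
def f1_score(gold, predicted):
    """F1 = harmonic mean of precision and recall over sets of detected cues."""
    tp = len(gold & predicted)      # correctly identified cues
    fp = len(predicted - gold)      # spurious cues
    fn = len(gold - predicted)      # missed cues
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical negation cues as (sentence_id, token_index) pairs.
gold = {(1, 4), (2, 0), (3, 7)}
regex_system = {(1, 4), (3, 6)}                      # e.g. a NegEx-style rule matcher
learned_system = {(1, 4), (2, 0), (3, 7), (4, 2)}    # e.g. a trained classifier

print("regex   F1:", round(f1_score(gold, regex_system), 3))
print("learned F1:", round(f1_score(gold, learned_system), 3))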

ISBN: 9781773610672