Image source: ai.googleblog.com
Neural networks are analytic techniques modeled after the (hypothesized) processes of learning in the cognitive system and the neurological functions of the brain. After a process of so-called learning from existing data, they can predict new observations (on specific variables) from other observations (on the same or other variables).
Neural networks are one of the data mining techniques. The first step is to design a specific network architecture, which includes a specific number of "layers," each consisting of a certain number of "neurons." The size and structure of the network need to match the nature (e.g., the formal complexity) of the investigated phenomenon. Because the latter is usually not well understood at this early stage, the task is not easy and often involves considerable trial and error. (There is, however, neural network software that applies artificial intelligence techniques to assist with this tedious task and to search for the "best" network architecture.) A minimal sketch of such an architecture is shown below.
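As a rough illustration of what such an architecture amounts to, the following sketch defines a small feed-forward network as nothing more than a list of layer sizes and the corresponding weight matrices; the layer and neuron counts are arbitrary illustrative choices, not values prescribed by any particular method.

```python
import numpy as np

# Illustrative sketch only: a feed-forward architecture described by layer sizes.
# The counts below (4 inputs, 8 hidden neurons, 1 output) are arbitrary choices.
rng = np.random.default_rng(0)

layer_sizes = [4, 8, 1]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x, weights, biases):
    """Propagate inputs through the layers (tanh hidden units, linear output)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)
    return a @ weights[-1] + biases[-1]

print(forward(rng.uniform(-1, 1, size=(5, 4)), weights, biases).shape)  # (5, 1)
```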
The new network is then subjected to the process of "training." In that phase, an iterative procedure is applied to the inputs (variables) to adjust the weights of the network so that it optimally predicts (in traditional terms, we could say "fits") the sample data on which the training is performed; a bare-bones example of such a loop follows.
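The loop below is a minimal sketch of that training phase, assuming a single hidden layer, a squared-error fit criterion, and plain gradient descent on synthetic sample data; real packages use more refined algorithms, but the idea of iteratively nudging the weights toward a better fit is the same.

```python
import numpy as np

# Synthetic "sample data": 200 observations of 3 input variables and one target.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 3))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2]).reshape(-1, 1)

# One hidden layer of 16 neurons; weights start at small random values.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):                  # iterative weight adjustment
    h = np.tanh(X @ W1 + b1)               # hidden-layer activations
    y_hat = h @ w2 + b2                    # network prediction
    err = y_hat - y
    loss = np.mean(err ** 2)               # how well the sample data are fit

    # Back-propagate the error and move each weight downhill.
    g_out = 2 * err / len(X)
    g_w2, g_b2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ w2.T) * (1 - h ** 2)
    g_W1, g_b1 = X.T @ g_h, g_h.sum(axis=0)

    W1 -= lr * g_W1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

print(f"final training MSE: {loss:.4f}")
```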
After this phase of learning from an existing data set, the new network is ready and can be used to generate predictions. The resulting "network," developed in the process of "learning," represents a pattern detected in the data. Thus, in this approach, the "network" is the functional equivalent of a model of relations between variables in the traditional model-building approach.
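A short usage sketch of that workflow, here with scikit-learn's MLPRegressor (my choice of tool, not one named in the text): the network is fit to an existing data set and then queried for predictions on observations it has never seen.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: fit on existing data, then predict for new observations.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(300, 2))
y_train = np.sin(X_train[:, 0]) + 0.3 * X_train[:, 1]

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)                    # the "learning" phase

X_new = np.array([[0.5, -1.0], [1.5, 0.2]])  # previously unseen observations
print(net.predict(X_new))                    # predictions from the learned pattern
```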
Image source: developers.google.com
However, unlike in traditional models, the relations captured in the "network" cannot be articulated in the terms usually used in statistics or methodology to describe relations between variables (such as, for example, "A is positively correlated with B, but only for observations where the value of C is low and D is high"). Some neural networks can produce highly accurate predictions; they represent, however, a typically atheoretical (one could say "black box") research approach. That approach is concerned only with practical considerations, that is, with the predictive validity of the solution and its applied relevance, and not with the nature of the underlying mechanism or its relevance for any "theory" of the underlying phenomena.
However, it should be mentioned that neural network techniques can also be used as a component of analyses designed to build explanatory models, because neural networks can help explore data sets in search of relevant variables or groups of variables; the results of such explorations can then facilitate the process of model building. Moreover, there is now neural network software that uses sophisticated algorithms to search for the most relevant input variables, thus potentially contributing directly to the model-building process; one way to do this kind of screening is sketched below.
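One simple way to screen inputs, sketched here under the assumption that scikit-learn is available (the text names no specific software), is to fit a network and then compute permutation importance: inputs whose shuffling barely degrades the fit are likely irrelevant.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

# Sketch: the target depends only on x0 and x1; x2 and x3 are pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=400)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1).fit(X, y)
result = permutation_importance(net, X, y, n_repeats=10, random_state=1)

for i, score in enumerate(result.importances_mean):
    print(f"x{i}: importance {score:.3f}")   # near zero => likely irrelevant input
```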
One of the major advantages of neural networks is that, in theory, they are capable of approximating any continuous function, so the researcher does not need to have any hypotheses about the underlying model, or even, to some extent, about which variables matter.
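The sketch below illustrates that approximation property informally: a small network is fit to a smooth nonlinear function without the researcher ever specifying its form. The architecture and settings are arbitrary choices, and this is an illustration rather than a proof.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Target: a smooth nonlinear function whose form the "researcher" never writes down.
X = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(X).ravel() * np.exp(-0.1 * X.ravel() ** 2)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=10000, random_state=0)
net.fit(X, y)

print("max absolute error on the training grid:", np.max(np.abs(net.predict(X) - y)))
```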
Image source: userlike.com
An important disadvantage, however, is that the final solution depends on the initial conditions
of the network, and, as stated before, it is virtually impossible to "interpret" the solution in
traditional, analytic terms, such as those used to build theories that explain
phenomena.
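To make the first point concrete, the sketch below (again using scikit-learn as an assumed tool) fits the same architecture to the same data with different random initialisations; the learned weights, and sometimes the predictions, differ from run to run.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Same data, same architecture, different initial conditions (random seeds).
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 - X[:, 1]

for seed in (0, 1, 2):
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=seed).fit(X, y)
    first_weight = net.coefs_[0][0, 0]       # one entry of the first weight matrix
    print(f"seed {seed}: first weight {first_weight:+.3f}, "
          f"prediction at (0.5, 0.5): {net.predict([[0.5, 0.5]])[0]:+.3f}")
```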
Some authors stress the fact that neural networks use, or, we should say, are expected to use, massively parallel computation models. For example, Haykin (1994) defines a neural network as:
"a massively parallel distributed processor that
has a natural propensity for storing experiential knowledge and making it
available for use. It resembles the brain in two respects:
(1) Knowledge is acquired by the
network through a learning process, and
(2) Interneuron connection
strengths known as synaptic weights are used to store the knowledge."
However, as Ripley (1996) points out, the vast majority of contemporary neural network applications run on single-processor computers, and he argues that a large speed-up can be achieved not only by developing software that takes advantage of multiprocessor hardware but also by designing better (more efficient) learning algorithms.
Neural networks are one of the methods used in data mining; see also Exploratory Data Analysis. For more information on neural networks, see Haykin (1994), Masters (1995), Ripley (1996), and Welstead (1994). For a discussion of neural networks as statistical tools, see Warner and Misra (1996).
Reference source: documentation.statsoft.com › STATISTICAHelp › NeuralNetworks