Table (continued):
DSS [23] | same as the standard ECG
An intelligent hybrid model for classification [24] | a model for handling data class imbalance | Hybrid Adaboost, Bagging, R.F., K-NN and SVM | building intelligent, accurate IoT-enabled healthcare systems
METHODOLOGY
The techniques that separate a data set into classes by the use of classification methods are widely applied in the medical field, where the different types of data must actually be separated. The starting step in the procedure is finding the class for the available data points; this class goes by several names, including target and output. Different mathematical theories, such as L.P., D.T., and N.N., are involved in categorization. Coronary disease detection can be done through categorization because the problem has two parts, that is, a person either has CVD or does not.
A) Support Vector Machines (SVM): SVM is used as a classification technique for data. A non-linear mapping technique is used to convert the data into a higher dimension for training, so that the points of the input variables can be differentiated by a hyperplane into the classes 0 and 1. A 2D plane helps to show this as a line, and it is expected that each point can be completely separated by this line. The distance from the hyperplane to the adjacent data coordinates is called the margin, and the line that has the largest margin is the optimal hyperplane for distinguishing the two classes. The points nearest to this hyperplane are known as support vectors; as the name suggests, they help to define or support the structure of the hyperplane. In general, optimization techniques are used to calculate the values of the parameters that maximize the margin. The hyperplane also depends on the kernel, and kernels come in different types, such as linear, polynomial, radial, and sigmoid. The hyperplane thus separates the locations in the available variable space according to their class, either 0 or 1. The SVM is widely considered due to its efficiency in pattern classification, and Kim et al. [25] proved the value of the SVM in classification for prognostic prediction. A brief mathematical description based on the SVM model used for the CVD calculation is given below. With the convention of linear divisibility for the training samples, we have

$\{(X_i, Y_i)\}_{i=1}^{n}, \quad X_i \in \mathbb{R}^{d}, \; Y_i \in \{0, 1\}$        (1)

where the design matrix X belongs to the d-dimensional space, and the response variable, CVD, is represented by Y, which takes a binary class value in the vector Y within the study. The appropriate discriminating equation is given by

$Z^{T}X + \beta = 0$        (2)

Here, Z represents the vector that determines the orientation of the hyperplane (the discriminating plane), X is the input vector, and β is the offset. There are infinite numbers of possible hyperplanes that efficiently classify the training data and can be applied to the validation dataset; the optimal classifier is the generalized hyperplane that lies farthest from each cluster of objects, and the input set of coordinates is then considered optimally separated by that hyperplane.
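To make the formulation above concrete, the sketch below fits a soft-margin SVM with a radial (RBF) kernel on a synthetic stand-in for the design matrix X and binary response Y; scikit-learn, the kernel choice, and the hyperparameter values are assumptions for illustration, not the study's exact configuration.

# Minimal sketch: kernel SVM for binary CVD-style classification.
# Assumes scikit-learn; the synthetic data stands in for the real design matrix.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the d-dimensional design matrix X and binary response Y.
X, y = make_classification(n_samples=500, n_features=13, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Scaling matters for SVMs; the RBF kernel maps points to a higher dimension,
# where a maximum-margin separating hyperplane (Z^T X + beta = 0) is sought.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)

print("validation accuracy:", accuracy_score(y_test, model.predict(X_test)))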
B) Random Forest (R.F.): This ML algorithm uses the concept of Bagging, or Bootstrap aggregation. To estimate a value from a data sample, the bootstrap mean is used, which is a powerful statistical approach: many samples of the data are taken, the mean of each sample is calculated, and all of the mean values are then averaged to approximate the real mean value. In bagging, the same sampling method is used, but instead of estimating the mean of every data sample, decision trees are generally built. Here, several samples of the training data are considered and a model is generated for every data sample.
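As an illustration of bagging with decision trees, the sketch below fits a random forest on a synthetic stand-in for the feature matrix; scikit-learn and the estimator count are assumptions for demonstration only.

# Minimal sketch: bagging with decision trees via a random forest.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Each tree is fit on a bootstrap sample of the training data; the forest then
# aggregates (votes over) the per-tree predictions, mirroring the bootstrap-mean idea.
forest = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
forest.fit(X_train, y_train)

print("validation accuracy:", accuracy_score(y_test, forest.predict(X_test)))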
C) Simple Logistic Regression: In this binary classification method, the values are identified as belonging to one of two classes. Both L.R. and linear regression aim to calculate the coefficient values for every input variable correctly. The logistic function acts as a non-linear function that transforms any real value into the range from 0 to 1. In logistic regression, the prediction is mainly used for estimating the probability that a data instance belongs to class 0 or class 1, which makes it useful for problems where a probabilistic rationale is preferred for a particular prediction. Better results from L.R. can be expected when attributes that are unrelated to the output variable are removed. It uses the sigmoid function for classification, like

$\hat{Y} = \dfrac{1}{1 + e^{-(\beta_0 + \beta_1 X_1 + \dots + \beta_d X_d)}}$        (3)

In this case, the L.R. coefficients β for each input variable are the values that will be estimated during the training phase. Here, the stochastic gradient is used to calculate and update values like
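A minimal sketch of such a stochastic-gradient fit is given below; the update shown is the standard gradient step for the sigmoid model in equation (3), and the learning rate, epoch count, and synthetic data are illustrative assumptions rather than the study's exact settings.

# Minimal sketch: simple logistic regression trained with stochastic gradient updates.
# The update rule is the standard per-sample gradient step for the sigmoid model;
# the paper's exact update equation may differ in form.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logistic_sgd(X, y, lr=0.1, epochs=100):
    """Estimate coefficients beta (intercept first) by stochastic gradient descent."""
    n, d = X.shape
    beta = np.zeros(d + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = beta[0] + np.dot(beta[1:], xi)
            y_hat = sigmoid(z)              # predicted probability, as in eq. (3)
            error = yi - y_hat
            beta[0] += lr * error           # update the intercept
            beta[1:] += lr * error * xi     # update one coefficient per input variable
    return beta

# Tiny synthetic example: two features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
beta = fit_logistic_sgd(X, y)
preds = (sigmoid(beta[0] + X @ beta[1:]) >= 0.5).astype(float)
print("training accuracy:", (preds == y).mean())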