AI: The New Frontier

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

Or, as my professor puts it: AI is the practice of designing (human-like) intelligence into artificial devices.

Artificial: something that is not natural.

Intelligence: the ability to make decisions, learn, and reason.

So, combining those two definitions: putting the ability to make decisions, learn, and reason about situations and data into an unnatural being (a computer) can be characterized as AI.

Then, if a computer has to do those things as well as a human, or better (the purpose of a computer is to do things better than a human does, isn’t it?), it needs the relevant data and the programming or skills to make those decisions.

But before we get into that, let’s talk about how ordinary computers work:

if valid { let the user log in }

else { “Try again mister, you have entered the wrong details.” }

So this system is just checking data against a prespecified set of rules, doing nothing that can be categorized as intelligent.

Now let’s look at an intelligent system that has some tricks up its sleeve to check whether the user is real or a bot, or whether the credentials are stolen.

When a user logs in, the system captures the IP address of the device being used and checks it against historical records of the user’s IPs and locations. If the data matches the website’s security database, then hurray, the user is logged in. But if the AI detects unusual traffic on the network, such as a connection through a VPN, it stops the user, flags them as a bot, or asks security questions to verify the true user.

This is a very rudimentary example of an AI system.
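That flow can be sketched in a few lines of Python. Everything here is invented for illustration: the usernames, the IP history, and the boolean VPN flag standing in for real traffic analysis.

```python
# Hypothetical IP history for each user (values are made up).
KNOWN_IPS = {"alice": {"203.0.113.5", "203.0.113.9"}}

def check_login(user, ip, looks_like_vpn=False):
    """Return 'allow', 'flag', or 'verify' for a login attempt."""
    if looks_like_vpn:
        return "flag"                      # unusual traffic: treat as a bot
    if ip in KNOWN_IPS.get(user, set()):
        return "allow"                     # matches the historical record
    return "verify"                        # new location: ask security questions
```

A real system would learn what "unusual" means from data rather than rely on a hand-written flag, which is exactly where machine learning comes in.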

Now, Let’s talk about how an AI system learns.

WHAT IS MACHINE LEARNING?

(This is what enables a computer to learn things by itself.)

Here I am going to refer to the six jars method to explain the basic principles of machine learning.

Source: One Fourth Labs

In this methodology, machine learning is divided into six components.

1. Data: data is any set of characters that has been gathered and translated for some purpose, usually analysis. It can be any kind of character, including text, numbers, pictures, sound, or video. If data is not put into context, it means nothing to a human or a computer.

Data plays a very important role in machine learning.

It is of two types:

(i) Structured data: information in databases or any other kind of sorted data that is easier to process.

(ii) Unstructured data: information that hasn’t been sorted and organized properly. Most current digital data is in this form.

For a computer to understand data, it should be in a machine-readable form, i.e. binary (0/1).

Example: a monochrome image (the computer interprets it as 1s and 0s, white pixels as 0 and black pixels as 1), or a database table containing students’ marks.

When doing supervised learning, we give our ML model data in the form of x and y, where x is our input data and y is its conclusion (the label or output). That way the model learns that this x corresponds to this y, and when a vast amount of data is given, it can draw conclusions on its own about any new data.
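As a toy illustration of x-and-y data (the feature names, values, and labels here are all made up), a supervised dataset is just a list of (input, label) pairs:

```python
# Toy supervised dataset: each entry pairs an input x with its label y.
dataset = [
    ({"reviews": 120, "rating": 4.5}, "popular"),
    ({"reviews": 3, "rating": 2.0}, "unpopular"),
]

# Split into inputs and conclusions: the x and y described above.
xs = [x for x, _ in dataset]
ys = [y for _, y in dataset]
```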

Now that we have some data, what do we do with it?

2. Tasks: any operation performed on given data to get a valuable output is called a task.

If we have some data, we should be able to perform tasks on it that produce usable results.

Example: on an e-commerce site,

Case 1: input for an ML model (x): product specs and reviews; output (y): a generated FAQ.

Case 2: taking x and y from above, we can generate answers to asked questions.

The condition is that both x and y (input and output) must be available for supervised learning, and we must be able to convert that data into numeric form.

1. Classification: with supervised machine learning, we can classify data.

E.g. sorting images with text and images without text into two separate datasets.
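The idea of classification can be sketched with a tiny nearest-neighbour classifier; this is not the actual image pipeline, just the principle, using a single invented numeric feature per image:

```python
def classify(sample, labelled_points):
    """Return the label of the training point closest to the sample."""
    return min(labelled_points, key=lambda p: abs(p[0] - sample))[1]

# Invented training data: (feature value, label) pairs.
train = [(0.1, "no-text"), (0.2, "no-text"), (0.8, "text"), (0.9, "text")]
```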

Source: One Fourth Labs

2. Regression: regression is the process of finding a model or function that maps the data to continuous real values instead of classes or discrete values. Regression is interested in real values, e.g. in an image containing text, the coordinates where the text starts and where it ends.
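A minimal sketch of regression: fitting a straight line y = ax + b with the closed-form least-squares solution (the data points below are invented and happen to lie exactly on a line):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lies on y = 2x + 1
```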

Source: One Fourth Labs

3. Clustering: clustering is the task of dividing data points into groups such that points in the same group are more similar to each other than to points in other groups. It is basically a grouping of objects on the basis of their similarity and dissimilarity.

E.g. given a set of images containing cats and dogs, an ML model should be able to put the cat images in one group and the dog images in another.
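The grouping idea can be sketched with one-dimensional k-means, the classic clustering algorithm (the points and starting centres below are arbitrary):

```python
def kmeans_1d(points, centres, steps=10):
    """Assign each point to its nearest centre, then move each centre
    to the mean of its group; repeat. Returns the sorted centres."""
    for _ in range(steps):
        groups = {c: [] for c in centres}
        for p in points:
            nearest = min(centres, key=lambda c: abs(c - p))
            groups[nearest].append(p)
        centres = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centres)
```

Running it on two obvious clumps, e.g. kmeans_1d([1, 2, 9, 10], [0, 5]), moves the centres into the middle of each clump.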

4. Generation: in a generation task, multiple inputs (x) are given and the machine generates a new, similar output inspired by all of them.

E.g. a generative adversarial network (GAN) generates art based on pre-existing art.

In a generation task, no y is given to the machine.

3. Models: a model is our approximation of the relationship between x and y.

A model can be characterized as a mathematical function that is used to solve a problem, that is, to complete a task using our given data.

In the above example, ax²⁵+bx²⁴+….+cx+d is used to complete the task, and it’s an approximation of the relation between x and y.

We don’t start with a very complex function, such as a 100-degree polynomial, and work down from there, because of time and computing-power constraints: running such a program is unnecessarily expensive in the worst case, it would hurt the optimization of our model, and the bias-variance tradeoff is another big reason not to do it this way.

4. Loss function: the loss function is used to determine which model is better than all the other models that exist to solve the same problem. Since optimization plays a big role in AI, our model needs to be as close as possible to the true values of our problem.

The loss function is something that tells us how good or bad the current parameters of the model are.

Source: One Fourth Labs

In the above example, three people have used different functions for our given data. To find which one is better, we calculate the loss of each function against the true output (y); whichever function comes closest to the true values is the best one for our model. The lower the loss, the better; 0 means no loss.

In our example, L1 has the lowest loss.

Source: One Fourth Labs

To calculate the loss we use the following formula:

Loss = Σ (true value − model’s value)²

Some loss functions are: squared-error loss, cross-entropy loss, KL divergence, etc.
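The squared-error formula above translates directly into code (the target and predicted values here are invented):

```python
def squared_loss(y_true, y_pred):
    """Sum of (true value - predicted value)^2 over all data points."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

y_true = [1.0, 2.0, 3.0]
perfect = [1.0, 2.0, 3.0]     # loss 0: no loss at all
off_by_one = [2.0, 3.0, 4.0]  # each point contributes 1 to the loss
```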

Up to this point we have some data, a task, a model to predict a solution, and a loss function. Now we need the model parameters for which the loss is minimum.

5. Learning algorithm: data collection, task and model creation, and selection of the loss function are human-centric tasks, but searching for highly optimized parameters that minimize the loss is a machine-centric task, and so we use a learning algorithm for this purpose.

Some learning algorithms are:

1. Gradient descent (batch, mini-batch, and stochastic variants)

2. Adagrad

3. RMSProp

4. Adam, etc.
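As a sketch of what a learning algorithm does, here is plain gradient descent minimizing the one-parameter loss L(w) = (w − 3)², whose gradient is 2(w − 3); the learning rate and step count are arbitrary choices:

```python
def gradient_descent(w=0.0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of L(w) = (w - 3)^2."""
    for _ in range(steps):
        grad = 2 * (w - 3)   # dL/dw
        w -= lr * grad       # move in the direction that lowers the loss
    return w
```

The loss is minimal at w = 3, and the iterates converge there; real learning algorithms do the same thing over millions of parameters at once.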

6. Evaluation: it’s a metric for evaluating the accuracy of an ML model by comparing the model’s output with the true output on multiple test cases.

In practice we use a top-k evaluation metric, where we give our model k chances to get the output correct. If the model produces the correct output in any of its k guesses, it counts as a success; otherwise it’s a failure.

Search engines are an example of this: if the content we searched for is displayed on the first page, we consider the search fruitful, since we didn’t have to go through multiple pages to find our data.

Top-k accuracy, precision, recall, F1, etc. are some evaluation methods.
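A small sketch of top-k accuracy (the ranked predictions and true labels below are invented): a case counts as correct if the true label appears anywhere in the model’s k highest-ranked guesses.

```python
def top_k_accuracy(ranked_preds, truths, k):
    """Fraction of cases where the true label is in the top k guesses."""
    hits = sum(t in preds[:k] for preds, t in zip(ranked_preds, truths))
    return hits / len(truths)

# Each inner list is a model's guesses, best first.
preds = [["cat", "dog", "fox"], ["dog", "cat", "fox"]]
truth = ["dog", "fox"]
```

With k = 1 neither case is a hit, with k = 2 one of the two is, and with k = 3 both are, so the score rises with k.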

We use accuracy rather than the loss function for evaluation because accuracy is much more interpretable. Compare 70% accuracy with a loss of 0.4: we know the model was right 70 times out of 100, but we don’t immediately know what 0.4 means.

Thank you for reading. This blog was created with information based on the PadhAI course from One Fourth Labs. I hope this has been an informative read. Comments and constructive discussion are very welcome.
