Wednesday, June 14, 2017

Threat of Artificial Intelligence?

Introduction.

When writing AI, programmers prepare it to 'learn' & extrapolate.

One of the main uses for AI is handling very large amounts of input data, learning how to make decisions based on it, and then 'making these decisions'.
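This learn-then-decide loop can be sketched in a few lines. The following is a toy illustration only, not any real AI system: 'learning' is just memorising labelled examples, and 'deciding' copies the label of the nearest known example. All names and data are made up.

```python
# A toy "learn then decide" loop: the program is shown labelled examples,
# then makes its own decisions on new input. Purely illustrative.

def learn(examples):
    """'Learning' here is simply memorising labelled examples."""
    return list(examples)

def decide(model, point):
    """Decide by copying the label of the nearest known example."""
    def distance(example):
        (x, y), _label = example
        px, py = point
        return (x - px) ** 2 + (y - py) ** 2
    _coords, label = min(model, key=distance)
    return label

# Hypothetical training data: (input, decision) pairs,
# e.g. sensor readings mapped to actions.
examples = [((0, 0), "brake"), ((10, 10), "accelerate")]
model = learn(examples)

print(decide(model, (1, 2)))   # near (0, 0)   -> brake
print(decide(model, (9, 8)))   # near (10, 10) -> accelerate
```

Real systems replace the memorised list with statistical models trained on far more data, but the shape of the loop is the same: ingest examples, then decide on new inputs.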

A really good AI can handle its tasks much better than humans can, but so far it can't think as humans do.

It's not perfect, however, as there can be software design errors and/or software implementation errors. It could learn wrongly from incomplete, erroneous or misleading data, or make wrong decisions based on an incomplete or badly designed learning process.

By the time an AI is completed, reality might have changed, and the design of the learning process might already be wrong as well.

Autonomous cars driven by AI already exist; these learned from humans driving & making decisions. So far there have been no major accidents, or at least I haven't heard, seen or read about any.


'Weak AI' and 'Strong AI'.

There's a difference between 'Weak AI' and 'Strong AI'.

'Weak Artificial Intelligence' can be described as:
- A system that 'thinks' rationally to do its task,
- A system that 'behaves' rationally to do its task.

'Strong Artificial Intelligence' can be described as:
- A system that 'thinks' as a human does,
- A system that 'behaves' as a human does.

Scientists in the 'Strong AI' field have the ambitious & far-reaching goal of constructing artificial systems with a near-human or human-exceeding intellect.

I read in an article on the Internet that the goal of constructing 'Strong AI' will be achieved in about three decades, but I can't confirm or deny this hypothesis as of yet.


For more, see also, if you wish: 'Weak' & 'Strong' Artificial Intelligence.


Threat?

Once it has learned, an AI makes decisions very well.

While programmers can understand how the learning works, from the simple steps to how they are combined, they can't understand & handle the immensely large amount of information that represents what the AI has learned; its complexity is too much for humans to analyze, learn & understand.

Programmers don't know how an AI makes its decisions, but they do know about the learning processes it used.
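A hypothetical toy example of this gap: the learning procedure below is a few readable lines (a perceptron learning logical AND), but what it produces is just a tuple of numbers. The numbers behave correctly, yet by themselves they don't explain the decision in human terms; real learned models contain millions of such numbers.

```python
# The learning procedure is simple and readable -- but what it learns
# is an opaque pile of numbers. Toy example: a perceptron learning AND.

def train(data, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0            # the 'knowledge': three numbers
    for _ in range(epochs):
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0           # nudge weights toward the target
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

def predict(weights, x0, x1):
    w0, w1, b = weights
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# Labelled examples of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train(data)
print(weights)  # three bare numbers: correct behaviour, no human-readable rule
```

Scale those three numbers up to the millions of parameters in a modern model and the programmer's position is exactly as described above: the training steps are known, the resulting knowledge is inscrutable.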

Is humanity on the brink of extinction?

I think the main danger is living in decadence & ignorance while letting AI rule nations & wage wars.

I've read that AI has uses in cyber-warfare with robotic drones, and that it's very efficient at that.

There are projects (by Google, for example) that will attempt to install 'stop constraints' in AI, denying it certain decisions as designed by the programmers.
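One way such a 'stop constraint' could work is as a hard-coded filter sitting between the AI's proposed decision and the action actually taken. This is a hypothetical sketch only; the decision names and the forbidden set are made up, and this is not how any named project actually implements it.

```python
# Hypothetical 'stop constraint': a fixed filter between the AI's proposed
# decision and the action actually taken. All names are illustrative.

FORBIDDEN = {"launch_weapons", "disable_own_off_switch"}

def constrained_decide(ai_decide, situation):
    """Run the AI's decision procedure, but veto forbidden decisions."""
    decision = ai_decide(situation)
    if decision in FORBIDDEN:
        return "halt_and_alert_operator"   # fall back to a safe default
    return decision

# A stand-in for some learned, opaque decision procedure.
def toy_ai(situation):
    return "launch_weapons" if situation == "threat" else "patrol"

print(constrained_decide(toy_ai, "calm"))    # patrol
print(constrained_decide(toy_ai, "threat"))  # halt_and_alert_operator
```

The point of the design is that the constraint lives outside the learned model: however opaque the AI's internals are, the filter is plain code the programmers can read and verify.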

So ... is there a danger?

I don't know, but potentially there is.
