When we consider thinking, we often conflate several distinct concepts.
There can be:
- Thinking of beings,
- Artificial thinking,
- Abstract thinking.
Thinking beings can be living or dead (during bardo, before reincarnation).
Artificial thinking refers to the decision-making processes of computers, including artificial intelligence's learning & decision-making. Computers do not think; they calculate with side effects. Computers do not have Mind.
Abstract thinking is a generalization of both the thinking of beings and artificial thinking, but artificial thinking is not the same as the thinking of beings. Precisely speaking - artificial thinking is not thinking, but computation: calculation with side effects.
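The phrase 'calculation with side effects' can be illustrated with a toy Python sketch (the function names here are made up for illustration): a pure calculation only returns a value, while a side-effectful one also changes state outside itself.

```python
# A pure calculation: the result depends only on the inputs,
# and nothing outside the function changes.
def pure_add(a, b):
    return a + b

# A calculation with side effects: besides returning a value,
# it also changes state outside the function (here, an 'actions' log).
actions = []

def add_and_act(a, b):
    result = a + b
    actions.append(f"computed {a} + {b} = {result}")  # the side effect
    return result

print(pure_add(2, 3))     # 5
print(add_and_act(2, 3))  # 5, and 'actions' now holds one entry
print(actions)
```

In this view, even an AI's 'decisions' are just returned values plus side effects - steering a car, printing to a screen, and so on.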
Can artificial thinking induce telepathy?
i think yes - because telepathic beings can be aware of an artificial intelligence's learning & decisions - for example, if they read them on a monitor screen; then they initiate their own telepathy.
But without telepathic beings, artificial intelligence would not have telepathic abilities, i think.
There are also things to consider, such as:
- media presence, especially on the internet, which can be affected by AI; memes,
- subliminal messages; 'the subtler Art, the better',
- cellphone messages,
- Integrated Information Theory & Panpsychism.
... with a little of a Cat.
Wicca & Witchcraft, Esoterics, Magick, Spirituality, Arts: Martial & Visual, Digital, Computer Tricks, Lifestyle, Philosophy & Entertainment.
... Magick of Candles, Incense & Crystals .... Magick of Moon, of Mind, of TAROT & Sigils ... and more.
Saturday, July 22, 2017
Wednesday, June 14, 2017
Threat of Artificial Intelligence?
Introduction.
When writing AI, programmers prepare it to 'learn' & extrapolate.
One of the main uses for AI is handling really big input data: learning how to make decisions based on it, then 'making these decisions'.
A really good AI can handle its tasks much better than humans can, but so far it can't think as humans do.
It's not perfect, however, as there can be software design errors and/or software implementation errors. It could learn wrongly from incomplete, erroneous or misleading data, or make wrong decisions based on an incomplete or wrongly designed learning process.
By the time an AI is completed, reality might have changed, and the learning process design might already be wrong as well.
Autonomous cars driven by AI already exist; these learned from humans driving & making decisions. So far there have been no major accidents, or at least i haven't heard, seen or read about any.
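The 'learn from examples, then decide' idea above can be sketched with a minimal toy model - a perceptron learning the logical AND function from four examples. This is an illustration of the principle only, not how any real autonomous-car AI works:

```python
# Training examples: ((input1, input2), desired_output) for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # learned weights
b = 0.0         # learned bias
lr = 0.1        # learning rate

def predict(x):
    # The 'decision': fire (1) if the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learning phase: nudge the weights whenever a decision is wrong.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

# Decision phase: the trained model now decides on its own.
print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

After the learning phase, the learned numbers - not the programmer - determine the output; already in this tiny case the behaviour lives in the weights, not in any hand-written rule.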
'Weak AI' and 'Strong AI'.
There's a difference between 'weak AI' and 'strong AI'.
'Weak Artificial Intelligence' can be described as:
- System that 'thinks' rationally to do its task,
- System that 'behaves' rationally to do its task.
'Strong Artificial Intelligence' can be described as:
- System that 'thinks' as a human,
- System that 'behaves' as a human.
Scientists in the 'Strong AI' field have the ambitious & far-reaching goal of constructing artificial systems with near-human or human-exceeding intellect.
i read in an article on the Internet that the goal of constructing 'Strong AI' will be achieved in about three decades, but i can't confirm or deny this hypothesis as of yet.
For more, see also, if You wish: 'Weak' & 'Strong' Artificial Intelligence.
Threat?
Once it has learned, AI makes decisions very well.
While programmers can understand how learning works - from simple steps to how they are joined - they can't understand & handle the immensely big amount of information that represents what the AI has learned; its complexity is too much for humans to analyze, learn & understand.
Programmers don't know how an AI makes its decisions, but they do know about the learning processes it used.
Is humanity on a brink of extinction?
i think the main danger is living in decadence & ignorance while letting AI rule nations & wage wars.
i read that AI has uses in cyber-warfare with robotic drones, and that it's very efficient at that.
There are projects (by Google, for example) that will attempt to install 'stop constraints' in AI, denying it certain decisions as designed by the programmers.
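Such a 'stop constraint' can be sketched as a wrapper that vetoes forbidden decisions before they take effect. The action names and the FORBIDDEN set below are hypothetical, chosen for illustration - they are not from any actual Google project:

```python
# Decisions the programmers never allow, no matter what the AI learned.
FORBIDDEN = {"launch_weapon", "disable_own_off_switch"}

def ai_propose(situation):
    # Stand-in for the learned decision process (opaque to us);
    # here it just echoes whatever the situation suggests.
    return situation.get("suggested_action", "wait")

def constrained_decide(situation):
    action = ai_propose(situation)
    if action in FORBIDDEN:
        return "halt"  # the programmer-designed constraint wins
    return action

print(constrained_decide({"suggested_action": "steer_left"}))    # steer_left
print(constrained_decide({"suggested_action": "launch_weapon"})) # halt
```

The catch, of course, is that the list of forbidden decisions must itself be designed correctly - it is another place where the design errors mentioned above can creep in.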
So ... is there a danger?
i don't know, but potentially there is.