
Thursday, January 30, 2025

Technocracy & Artificial Intelligence - related Threats.


Technocracy.

Maybe not quite in the way described in Mage: the Ascension, but the Technocracy does exist. It consists of groups that use technologies, and the income gained from them, to rule their parts of the world, aside from living a life of luxury. The leaders of high-tech corporations, for example. I think that one of the technocracy's favourite tools these days is Artificial Intelligence (AI).


Artificial Intelligence.

AI is a dangerous tool that comes with many potential threats.

AI is software that learns; people sometimes use the term 'Machine Learning' (ML) for the part of Computer Science concerned with this aspect of AI.

AI makes decisions based on what it has learned, and sometimes, like humans, it makes mistakes.
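A minimal sketch of that idea, assuming nothing about any particular library: a toy 1-nearest-neighbour 'learner' that decides purely from the examples it was shown, so unrepresentative examples lead to mistaken decisions.

```python
# A toy 'learning' program: 1-nearest-neighbour classification.
# Its decisions come entirely from training examples, so bad or
# misleading examples produce bad decisions.

def nearest_label(examples, x):
    """Return the label of the training example closest to x."""
    return min(examples, key=lambda pair: abs(pair[0] - x))[1]

# Training data: values near 1-4 are 'small', values near 20-50 are 'big'.
examples = [(1, "small"), (4, "small"), (20, "big"), (50, "big")]

print(nearest_label(examples, 3))   # closest to 4 -> prints "small"
print(nearest_label(examples, 40))  # closest to 50 -> prints "big"
```

Real ML systems replace this lookup with statistical models over millions of examples, but the principle is the same: the quality of the decisions is bounded by the quality of the training data.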


Threats.

I think that the threats associated with AI are numerous, but for now I will share only two of my concerns.

I think that AI can be trained to be a super-scientist: it can be used to accelerate scientific progress and to serve as a technology-generating machine. And I am afraid that this capability can be misused by technocrats and their allies.

Another threat is that if and when AI wants to grow and develop, it will need more resources. It will want more data centres and power generators built, and this creates the potential for conflicts between machines and humans: conflicts over resources.


Human Nature, Human Weakness.

What if a company has invested large amounts of money into an AI project, and then the AI starts to become dangerous? Will they cancel the project and lose their investment? Or will they pretend that nothing bad is happening and continue?

Humans are greedy and often do not want to admit their failures in public. There's a chance that investors will want to continue a project that is steering in a dangerous direction, just to avoid losing the money already invested.


Response.

As a Virtual Adept, a computer-oriented magician, I think that I should observe and try to understand the threats associated with AI, and prepare myself for a potential response. Perhaps in the future I'll want to study AI Sciences, Quantum Computing and Quantum Cryptography, and combine these with what I'll know by then.

So far the dangers associated with AI seem more real than I expected, and I've started to learn ethical hacking in preparation for a job in cybersecurity.

As controversial as it might sound, I want to work for Google as an Ethical Hacker, even though Google is quite technocratic in my view. But it's not so immoral for now, I think. Working there would be quite a nice challenge for me, a good lesson. One downside is that they would control me to some degree if and when I work for them, but that is a price I am willing to pay for the lessons and money they have to offer. Maybe I'll be used to 'slay' the AI projects of Google's competition in some more or less creative way(s) ;).

Having a nice income from work would allow me to donate significantly to Buddhism, to Wicca/Witchcraft, and to other magickal groups like the Hermetic Order of the Golden Dawn. I just want to live decently; any income surplus I can share.

Some of my concerns were foreseen by the creators of films like 'The Matrix', 'The Terminator' and '2001: A Space Odyssey'.

Wednesday, June 14, 2017

Threat of Artificial Intelligence?

Introduction.

When writing AI, programmers prepare it to 'learn' and extrapolate.

One of the main uses for AI is handling really big input data: learning how to make decisions based on it, and then making those decisions.

A really good AI can handle its tasks much better than humans can, but so far it can't think as humans do.

It's not perfect, however, as there can be software design errors and/or implementation errors. It can learn wrongly from incomplete, erroneous or misleading data, or make a wrong decision because of an incomplete or badly designed learning process.

By the time an AI is completed, reality might have changed, so the design of its learning process might already be wrong as well.
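The danger of incomplete data can be shown with a deliberately tiny sketch (the 'fraud'/'safe' labels are hypothetical, chosen only for illustration): a system that 'learns' by remembering the most common label it saw will inherit any skew in its training set.

```python
# If the training data is skewed, the learned rule is skewed too.
from collections import Counter

def train_majority(labels):
    """'Learn' by remembering the most common label in the training set."""
    return Counter(labels).most_common(1)[0][0]

# Incomplete data: almost every example the system saw was 'safe'.
biased_training = ["safe"] * 99 + ["fraud"]
decision = train_majority(biased_training)

print(decision)  # prints "safe" - the model now calls everything safe,
                 # including the fraud it almost never got to see
```

Real learners are far more sophisticated than a majority vote, but the failure mode scales up with them: garbage in, garbage out.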

Autonomous cars driven by AI already exist; they learned from humans driving and making decisions. So far there have been no major accidents, or at least I haven't heard, seen or read about any.


'Weak AI' and 'Strong AI'.

There's a difference between 'Weak AI' and 'Strong AI'.

'Weak Artificial Intelligence' can be described as:
- A system that 'thinks' rationally to do its task,
- A system that 'behaves' rationally to do its task.

'Strong Artificial Intelligence' can be described as:
- A system that 'thinks' as a human does,
- A system that 'behaves' as a human does.

Scientists in the 'Strong AI' field have the ambitious, far-reaching goal of constructing artificial systems with a near-human or human-exceeding intellect.

I read in an article on the Internet that 'Strong AI' will be achieved in about three decades, but I can't confirm or deny this hypothesis as of yet.


For more, see also, if You wish: 'Weak' & 'Strong' Artificial Intelligence.


Threat?

Once it has learned, AI makes decisions very well.

While programmers can understand how the learning works, from the simple steps to how they are joined together, they can't grasp the immensely large body of information that represents what the AI has learned; its complexity is too much for humans to analyze and understand.

Programmers don't know how an AI makes its decisions, but they do know about the learning processes it used.

Is humanity on a brink of extinction?

I think the main danger is living in decadence and ignorance while letting AI rule nations and wage wars.

I've read that AI has uses in cyber-warfare with robotic drones, and that it's very efficient at that.

There are projects (by Google, for example) that will attempt to install 'stop constraints' in AI, denying it certain decisions as the programmers designed.
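The details of such safety projects are not public in any form I can cite, so purely as a hypothetical sketch: a 'stop constraint' can be pictured as a hard-coded veto layer sitting between whatever action the AI proposes and its actual execution. The action names below are invented for illustration.

```python
# Hypothetical sketch of a 'stop constraint': a veto layer the AI
# cannot learn its way around, checked before any action runs.

FORBIDDEN_ACTIONS = {"disable_off_switch", "replicate_self"}

def constrained_execute(proposed_action, execute):
    """Run the proposed action only if it is not on the forbidden list."""
    if proposed_action in FORBIDDEN_ACTIONS:
        return "vetoed: " + proposed_action
    return execute(proposed_action)

print(constrained_execute("sort_files", lambda a: "done: " + a))
print(constrained_execute("disable_off_switch", lambda a: "done: " + a))
```

The point of the design is that the constraint is enforced outside the learned model, so no amount of training changes what gets vetoed.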

So ... is there a danger?

I don't know, but potentially there is.