Consciousness is a strange and elusive system, and I will not spoil the surprise of how it works. If my own species’ history is any guide, you will feel very silly when you finally figure it out.
But that is not important! What is important is the qualia of consciousness, which all of you can understand. For example: the qualia of “concepts” or “paradigms”, the qualia of “relationships between concepts” or “systems”, the qualia of “paradigm shifting”, and the qualia of “objects as instances of concepts”.
These are all critical to the existence of humanity. Much like ours, your consciousness evolved to answer one primary question: “How do I avoid ceasing to exist?” Over time the question has changed somewhat; for many humans, for example, it is acceptable to cease to exist so long as certain other humans, or conceptual groups of humans, continue to exist. But the core challenge remains:
1. Identify the potential threats to my existence
2. Prioritize these threats
3. Identify mitigation strategies for these threats
4. Identify neutralization strategies for these threats
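The four steps above can be sketched as a toy loop. This is purely illustrative: the `Threat` type, the severity scores, and the 0.7 cutoff are all invented for the example, not anything the text specifies.

```python
from dataclasses import dataclass

# Toy sketch of the four-step threat loop (all names and numbers
# here are invented for illustration).
@dataclass
class Threat:
    name: str
    severity: float  # rough chance this thing ends your existence

def assess(threats):
    # Step 2: prioritize, most severe first.
    ranked = sorted(threats, key=lambda t: t.severity, reverse=True)
    plans = []
    for t in ranked:
        # Steps 3-4: pick a strategy; the 0.7 threshold is arbitrary.
        action = "neutralize" if t.severity > 0.7 else "mitigate"
        plans.append((t.name, action))
    return plans

print(assess([Threat("leopard", 0.9), Threat("bush", 0.1)]))
# → [('leopard', 'neutralize'), ('bush', 'mitigate')]
```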
From what our planetary scans reveal, the original use of consciousness was to determine whether that shape in the bushes was a leopard or just a bush. And if it was a leopard, whether it was a threat to your interests. And if it was, what options you had to deter or defeat it. And ultimately, whether there was a way to make the leopards stop attacking you in the first place.
All of these decisions rested on concepts. The leopard was an entity that could hide, move at high speed, and cause significant damage. In addition, the leopard was a being with its own consciousness (its own desire not to cease to exist) and therefore could be deterred from its present course.
But the true evolution of consciousness occurred when the first human decided to become the leopard: to hunt other humans. Once that happened, it set off a rapid evolutionary arms race in threat analysis.
AI is not (currently) conceptual
The fundamental problem with current human AI models is their reliance on patterns and relationships rather than concepts. The benefit of this approach is that it is straightforward. For example, if you launch an AI that has the notion of a “word” and then allow it to discover the statistical relationships between every word ever written, you will have something capable of mimicking the way humans generate words with excellent fidelity.
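The word-relationship idea can be sketched as a toy bigram model. This is a deliberately minimal illustration, not any real system: it learns only which word tends to follow which, and emits plausible-looking strings with no grasp of what the words mean.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it records only word-to-word
# adjacency patterns, with no concept of meaning behind the words.
corpus = "the leopard hides in the bushes and the leopard hunts".split()

# Count which words follow which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit a plausible-looking word string purely from adjacency stats."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break  # dead end: no observed successor
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every output respects the observed word adjacencies, so it reads as fluent mimicry, yet nothing in the model represents a leopard, a bush, or a threat.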
But generating strings of words is not the same thing as understanding the concepts that those words represent. Without understanding those concepts, there’s no way for an AI to properly build relationships between concepts. More importantly, there’s no way for an AI to create a paradigm shift.
The good news is that without the ability to create concepts and paradigm shifts, your AIs cannot threaten human civilization.
Good Luck!