A timely topic, Aleks. How far can "intelligence" progress without "becoming conscious"?
This is a philosophical question, as well as a practical one. I have a feeling, but so does everybody.
I don't know the "origin of consciousness". Many assume that it arises from platform capabilities, and many others are certain that it is a gift from universal consciousness ("God").
How would either hypothesis be tested? An "objective" test seems impossible.
Is there a subjective test? It would be non-transferable knowledge.
Perplexing...
https://drjohnsblog.substack.com/p/not-that-future
https://drjohnsblog.substack.com/p/persistent-questions
Very interesting questions, indeed.
I'm sure these are good questions to ask in one of our talks with certain experts.
We will keep that in mind!
Thanks, John! As always.
Good. AI definitely has the potential to change our world significantly.
I think there are two dangers here.
One is the use and development of AI by state and non-state actors for nefarious reasons.
Second, that we will reach AGI which on its own decides that humans are not necessary, or takes autonomous actions that would hurt us in some way.
Everyone seems to focus on the second, but it's the first that has the most likely chance of happening.
I fully agree here.
We will have that discussion both on BMA (in the future, in the paid section) and on BMT (free).
I just saw this:
Researchers train AI chatbots to 'jailbreak' rival chatbots - and automate the process
The 'Masterkey' method means that if a chatbot is updated, a new jailbreak can be automatically applied.
https://www.tomshardware.com/tech-industry/artificial-intelligence/researchers-train-ai-chatbots-to-jailbreak-rival-chatbots-and-automate-the-process
Certainly, discussing this topic with Diego further would be a good idea.
But yes, I'm convinced that this is what's happening behind the scenes of public applications.
The competition between civilian and military-grade AI bots is notable.
In the best case, it will act like a sort of "market economy" or Darwinism, keeping developers motivated to always be at the forefront of development. This is similar to what happens in the pharmaceutical or military industry. On the other hand, it carries significant risks... :) As you've already pointed out several times.