Some notes and thoughts from watching the @Plinz interview with @lexfridman.
First of all, great discussion.
https://youtu.be/e8qJsk1j2zE?si=ruyXxm_r7kuRrrPR…
Joscha discusses stages of lucidity that, to me, are similar to Maslow's pyramid, but with the twist of contemplating additional stages on the way to self-actualization.
Levels of lucidity:
Stage 1: Reactive survival (infant)
Stage 2: Personal self (young child)
Stage 3: Social self (adolescence, domesticated adult)
Stage 4: Rational agency (self-direction)
Stage 5: Self authoring (full adult, wisdom)
Stage 6: Enlightenment
Stage 7: Transcendence
I would add to the stages of lucidity the levels/circle of knowledge.
Level 1: The stuff you know
Level 2: The stuff you know exists but haven't learned how to do
Level 3: The stuff you don't know that you don't know
This whole area falls into the "stuff you don't know that you don't know" for most people, myself included to an extent, as I've gone down these rabbit holes in the past.
This would translate into a better understanding of ourselves if this knowledge were accessible to everyone, which is not the case.
What is certain and currently happening is that this knowledge is pre-programmed into the logic of systems we use every day, increasingly powered by AI that has a level of agency we don't even realize is possible.
This level of understanding is not available to most people, but it is increasingly being trained into LLMs.
If we take a 10,000-foot view, this is the first time in human history that we are dumping human knowledge and collective thinking, through training models and scraping data from places like http://archive.org and social media, into what is essentially going to be an advanced AGI or super AGI.
That could be a tremendous leap in our evolution, but equally a terrifying extrapolation of unintentional social engineering through LLM hallucinations.
In cybersecurity we spend a lot of our time on compliance and tweaking controls, while criminals are learning about psychology and how to manipulate people through social engineering before they ever look at exploiting firewalls or taking advantage of misconfigurations. http://Shodan.io is full of the latter.
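To give a sense of how little effort the latter takes, here is a minimal sketch using the shodan Python library; the SHODAN_API_KEY environment variable and the example query are assumptions of mine for illustration, not findings about any specific system:

```python
# Minimal sketch: querying Shodan for commonly misconfigured, exposed services.
# Assumes the `shodan` package is installed and SHODAN_API_KEY holds a valid key.
import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Illustrative query only: services still answering with a default-credentials banner.
results = api.search('"default password"', limit=10)

print(f"Total matches: {results['total']}")
for match in results["matches"]:
    # Each match includes the IP, port, and owning organization (when known).
    print(match["ip_str"], match.get("port"), match.get("org", "n/a"))
```

The point is not the specific query but how trivially discoverable these misconfigurations already are.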
We are willingly creating black boxes that can analyze us better than we can understand ourselves, leading to easier manipulation and reality distortion if left unchecked.
I've always believed Psychology should be required learning for most fields, but especially Cybersecurity.
At this point in time, I think it is even more critical.
At the very least, this would result in some sort of ethics review to determine an AI system's level of intelligence and the safeguards and boundaries that should be implemented.
This is necessary because Psychologists don't necessarily understand LLMs or their capabilities, and technical people don't understand psychology.
Is there another role that could bridge this gap?
#LLMs #AI #psychology #cybersecurity
Originally posted on X: here