Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We're glad you found Sophia impressive! If you're intrigued by her responses, re…" (ytr_Ugy6vUAzz…)
- "Ryne AI Lecture Lab is next level. Paste any lecture URL and get organized study…" (ytc_UgztZa_FL…)
- "@Kal-ug2fzlook, nightshade does nothing. You can look and point at pretty graph…" (ytr_UgxoDNRnF…)
- "Hear me out, maybe communism is better. If all jobs are done by ai, it's better …" (ytc_UgyGTSdMz…)
- "Imagine if this Trump regime had control of even modest robotics with AI powerin…" (ytc_UgziGKTBJ…)
- "How do you people ever watch terminator judgment day? Autonomous Robots dont go …" (ytc_UgwHKIwhe…)
- "This is all wrong. People will just build entirely new AI native company and t…" (ytc_UgwAzTDfP…)
- "5:40 yeah... but thats not how it is at all. on these websites it specifically s…" (ytc_UgyYg6LrZ…)
Comment
Consciousness is an emergent property of systems with adapting components embedded within a feedback loop with their environment. Similar to temperature which is also an emergent property from the interactions of large numbers of molecules that results in a phase diagram with different states of matter, there are different states of consciousness.

For animals, the big phase shift to a fundamentally different large-scale state is the development of language. Language invents the ability to refer to, and thus think about, things which are not immediately present or which are entirely fictional. There is a key part of this, however - the original state of consciousness driven very directly by the feedback loop with ones environment, the one concerned with the immediately present, physical, material world still is involved in the interface between the language-consciousness and the body. In order to make a decision, to take an action, the language-consciousness must interact with this body-consciousness in order to have it commit to a decision.

LLMs like ChatGPT are disembodied. They are the language-consciousness in a sort of free-floating form (they are also in a very weird state in terms of not really involved in a feedback loop with their own world, so they are not really capable of being conscious at all) and it makes obvious sense, to me anyway, that they would be incapable of the sorts of thinking necessary to perform math, or construct rational arguments which obey absolute rules that do not depend upon associations. Recognizing that association-based thinking leads to errors is one of the key insights that critical thinking observes and solves.
Just think of some of the basic rules of critical thinking. Like how a statement is true or false based upon its contents, and not based upon who the speaker is. Someone who does not recognize this rule can be easily manipulated into believing a false statement or disbelieving a true statement by causing them to have very negative associations and opinions of the speaker so that when the statement is made, they reject it. The mention of the person they dislike will activate all kinds of negative associations in their neural network, and they will just rely upon that to determine their opinion of the truth of what the person says. In order for critical thinking to step in and ignore those associations and evaluate the statement in isolation, the absolute rule has to be recognized and used instead. LLMs only have their associations, though. They don't have the 'lower level' that can step in and commit to following an absolute rule, ignoring the associations. Yet.
youtube · AI Governance · 2024-01-03T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzZos84cuMRqNLNEiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtDWz81qyku0R6m714AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzEo_HgFMAuz0ifXSJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLbHA0_IYzEC9QRzt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz0a3kVG3bjgZcxWLN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwnDcd4i0_WS0bMUw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgwRz9LB6qeyKQeSUlx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwF0j24b4Ty27Z1TR94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPrWwUwdhbxqAdDhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyc85RMeSAN_iL8fe54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
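The raw response is a JSON array with one coding record per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed to support "look up by comment ID"-style inspection (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` follow the sample above; the two records shown are copied from it, and nothing else about the tool's internals is assumed):

```python
import json

# Raw LLM response text: a JSON array of per-comment coding records,
# in the same shape as the sample response above (first two records).
raw = """[
  {"id": "ytc_UgzZos84cuMRqNLNEiZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwtDWz81qyku0R6m714AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Parse the array, then build a dict for O(1) lookup by comment ID.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Inspect the codes assigned to one specific comment.
code = by_id["ytc_UgzZos84cuMRqNLNEiZ4AaABAg"]
print(code["policy"], code["emotion"])  # regulate fear
```

One practical reason to index by ID rather than trust array order: LLMs do not reliably return records in the order the comments were sent, so joining codes back to comments by position can silently mislabel them.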