Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a response by comment ID, or inspect one of the random samples below.
- "AI is the worst thing ever created! We literally are destroying ourselves! I fee…" (ytc_Ugz71QVnw…)
- "I can see self driving trucks driving on the interstate - and just need truck dr…" (ytc_Ugza1K-de…)
- "It sucks that a few people determine the fate of billions. People that understan…" (rdc_degn6lf)
- "I spotted the flaw in auto driving vehicles right away, there's nobody DRIVING! …" (ytc_UgxqOTDgK…)
- "@pochaccocinoall I can say is try to find someone who’s against ai and willing t…" (ytr_Ugwyjp90b…)
- "I’m sorry but it still sounds like AI, but I get why the mum would panic…" (ytc_UgwcMZtSk…)
- "2026 and you can buy a robot that will do your laundry and cook your breakfast. …" (ytc_UgywRPB0D…)
- "The thing is, the LAWS regarding self-driving cars isn’t keeping up. Say if a se…" (ytc_UgwcHWCMF…)
Comment
Honestly, Idk why nobody is linking the simulation theory they themselves even mentioned, to the fact that super intelligent beings should be capable of compassion and respect. These are values that require a certain threshold of intellect, and we are the clearest example of this.
We are capable of more “good” than any other being on Earth, although because of our LACK of intelligence, we often do illogical and evil things. Being good IS intelligent.
When AI messes up and causes harm to people, every time, the first thing I think is, “Well that wasn’t very smart of it.” Doing illogical things and causing harm is simply not smart.
That’s why I think super intelligence is more capable of showing US just how unintelligent, disrespectful, destructive, and uncollaborative WE are, than of doing those things themselves. Good = logical. Evil = illogical. That’s how this world was programmed.
youtube · AI Governance · 2025-09-04T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxrBrXu90G8WfDkW6F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwMFip-m-SoPj9IJvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2N4j3Fm1fMaQG3Xp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_HgtOjilI4uzrC8R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyd9LnV5jNUb0nUhxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyIpY-WIkb63NFZl014AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzd_Xy3wIPYplKSbOl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxhxt0GTSCV8_dYfjp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzn1HTUGw_VQuQ3msB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyifh9v-gQEUP6V2wd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
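The "Coding Result" table above is simply the row of this JSON array whose `id` matches the inspected comment. As a minimal sketch of how such a batch can be parsed and looked up by comment ID (the `lookup_coding` helper and the inlined two-row sample are illustrative, not the dashboard's actual code; the dimension names match the response shown):

```python
import json

# Illustrative two-row excerpt in the same shape as the batch above.
raw_response = """
[
  {"id": "ytc_Ugzn1HTUGw_VQuQ3msB4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw2N4j3Fm1fMaQG3Xp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    codings = json.loads(raw)  # raises ValueError if the model emitted invalid JSON
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugzn1HTUGw_VQuQ3msB4AaABAg")
print(coding["responsibility"])  # distributed
```

Parsing with `json.loads` rather than string matching also surfaces malformed model output early, since any deviation from strict JSON raises an exception instead of silently miscoding a comment.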