Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by choosing one of the random samples below.
- "Gene Roddenberry Star Trek 1 we must merge with AI and technology we must become…" (ytc_UgxZIXDC2…)
- "A driverless car does not mean u can sit at the wheel knitting your old man a sc…" (ytc_UgwpojYDd…)
- "No outcry when these AI execs talk about wanting to wipe out the white collar wo…" (rdc_o5sed36)
- "Believe or not but there is videos of supposed time traveler come from future, a…" (ytc_UgyhEUfui…)
- "I was blocked out of using chatGpt for questioning it's opinion that gender is f…" (ytc_UgyrO6_CV…)
- "You would think Elon and the likes would be happy to keep people in work,how muc…" (ytc_UgyK7vErd…)
- "Sorry Drew, the Claude 4 was an experiment. The reason they remove the code of "…" (ytc_Ugy3XJnMj…)
- "He wanna record so bad, can't pull up the app to get it fixed🙄just fyi people. …" (ytc_UgzM6sBRK…)
Comment

> What does general AI want, what would be its goal and motivation. I understand why it might see us as a potential threat or pest, but what exactly would it try to achieve past pacifying us. Would it be something relatable, like just to survive and prolifirate or something completely alien to us? And why is every scenario discussed always doom for humanity. Neanderthals did not reach the complexity or mastery that humanity did before going extinct, but they are still sort of here, inside the DNA of most of us and whatever skills they taught our ancestors. I just hope that if humans go out, we can work well enough with AI that humanity continues on with our creation.

youtube · AI Governance · 2023-07-09T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbEnYRITSPEzzfNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_7ScVMwd1x5qp59d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxjOpITEQE4VwlOsQN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHY-799Ce6_s10wkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOG9xbtBYDm17OOul4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-hTWi8px3of0Ku3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJI-0O2P_As1352NJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQH_QjeTXhP8bGH_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5409cgTfOCCBcEjh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```