Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples (click to inspect):

- "The goverment would collpase before it can even get to this point, thats what th…" (`ytr_Ugz1U8ODa…`)
- "Why aren't YOU LOT calling our "certain" child grooming gangs...? I see a lot o…" (`ytr_UgzXmrbPx…`)
- "I want to see them try to have one those trucks drive I-70 the Rocky Mountains o…" (`ytc_UgyCCyR5T…`)
- "Eliezer "We should airstrike datacentres training something better than GPT-4 ev…" (`ytr_UgxhOnrWN…`)
- "Ironically tried to go to the comments just now and it was trying to summarize t…" (`ytc_Ugyc0B6Ws…`)
- "If all is accomplished thru automation, what is left to motivate humanity? No ca…" (`ytc_UgwQLBj0K…`)
- "Imagine your government creates an AI Agent for you and every other citizen to o…" (`ytc_Ugx07iij1…`)
- "why do you think that ai will destroy humans and what does ai have to gain from …" (`ytr_UgzLvJxw9…`)
Comment

> I'm here at the beginning of this podcast just to hear what the heck A.I would want to do to us that would make it dangerous, aside from giving us ideal living ways that won't work because humans don't want to be ruled or have a ruler anymore because we've finally found out that doesn't work

youtube · AI Governance · 2025-06-23T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
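Each coded record fills the four dimensions shown above. A minimal sketch of validating one record, where the allowed values per dimension are inferred only from the examples on this page (the full codebook may define more categories):

```python
# Allowed values per coding dimension, inferred from the samples shown
# on this page -- an assumption, not the published codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "company"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim!r} value: {value!r}")
    return problems

print(validate_coding({"responsibility": "ai_itself",
                       "reasoning": "consequentialist",
                       "policy": "none",
                       "emotion": "mixed"}))  # → []
```

A check like this is useful as a guard before accepting model output, since an LLM can occasionally emit values outside the schema.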
Raw LLM Response
```json
[
  {"id":"ytc_Ugxmfr-yt6SnOy5g-gR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBB4jz5PeLKVkCZfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwFBXboTnYCB54gjPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyCDDXy2Au46mlEQkx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxEzmSkmWLZ0qNo6jp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxENBZs2kOTR2jtBIB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZ2u5-_cFaLadGjM14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw9rtdwhxG7VrkvnvB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwHLshQ4nwE6xpeaAl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSCnWx1Tr3Sjiab_54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
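The raw response is a JSON array of coded records, one per comment. A minimal sketch of the "look up by comment ID" step, assuming the model output parses cleanly as JSON (real responses may need stripping of code fences or trailing text first); the IDs here are hypothetical placeholders, not real comment IDs:

```python
import json

# Hypothetical raw model output in the same shape as the array above.
raw = '''[
  {"id": "ytc_abc", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_def", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]'''

records = json.loads(raw)                 # parse the JSON array
by_id = {r["id"]: r for r in records}     # index records by comment ID

print(by_id["ytc_abc"]["emotion"])        # → mixed
```

Indexing by ID this way makes the inspection lookup O(1) per query instead of a scan over every coded batch.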