Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it.

Random samples:
- idk why we're making ai do the stuff that's actually enjoyable to do, where're t… (ytc_UgwSgmvF1…)
- EXACTLY, all the creative decisions are directly in the prompt, not the AI image… (ytr_UgwWtbmsP…)
- The big limit to AI is energy. There will be a war over energy before AI ever de… (ytc_Ugz4LciTL…)
- The funniest part about this is the AI is willing to cut peter off for clarifica… (ytc_Ugyup1quA…)
- Stop talking about alchemy. You don’t know what it is. You were completely talki… (ytc_UgzPzpx6k…)
- I can't believe that people think that this robot is real. This looks like a v… (ytc_UgwMxTSlO…)
- Okay thank you for taking the time, though I'm more curious about the "identify … (rdc_gm1i11h)
- the definition of AI means equivalent to human, super intelligence is not AI as … (ytc_UgzOMxtnA…)
Comment
Surely as we approach super intelligence, we could also get super intelligent AI to help come up with solutions for how to deal with the job losses and the redundancy of human held positions. As AI gets smarter and smarter, it will be able to come up with better solutions than we could. I don't think the danger is the AIs themselves. I think the real dangers come from the super rich sociopaths who want to own everything and profit from everything. The AIs that we fear are the one thing that will become smart enough to figure out a solution but if that solution conflicts with the super rich people's goals of ever increasing wealth then that, if anything will be the spark that ignites conflict between AI and humans.
youtube · Cross-Cultural · 2025-11-08T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
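
A coded record like the one above could be represented and lightly validated along the lines of the sketch below. The dataclass name and the allowed value sets are assumptions: the sets contain only the codes visible on this page, not necessarily the project's full codebook.

```python
from dataclasses import dataclass

# Assumed value sets, taken only from the codes visible in this section;
# the real codebook may define additional categories.
RESPONSIBILITY = {"company", "developer", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "resignation", "indifference", "mixed"}


@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus the comment ID."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject values outside the observed sets instead of storing them silently.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")


# Example using the coding values from the table above; the ID is taken from
# the matching entry in the raw response shown below.
record = CodingResult(
    comment_id="ytc_Ugys17yEvmywAplVKqV4AaABAg",
    responsibility="company",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
)
```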
Raw LLM Response
```json
[
{"id":"ytc_UgxzMMfSWt9DWlSukTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9vGjogcF3ruUo1dt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyNmVaLIbOETmi63dh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyOIOmarGIRKyLtZjl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz-N18A_YSZKMyFYIR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMi-TYth6Ircjwnlh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugys17yEvmywAplVKqV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx2AphaY1y2w_pjuZB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEX44Sior51IvBAT94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxUdQHmaPL1Uj6VZu94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
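
To connect a raw response like this back to individual comments, a minimal parsing sketch is shown below. It assumes the model returns a JSON array of per-comment objects as above; the function names `parse_raw_response` and `lookup_coding` are illustrative, not part of the tool.

```python
import json
from typing import Optional


def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its remaining coding dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}


def lookup_coding(raw: str, comment_id: str) -> Optional[dict]:
    """Return the coding for one comment ID, or None if the model
    returned no record for that ID."""
    return parse_raw_response(raw).get(comment_id)


# Usage with an entry copied from the response above.
raw_response = (
    '[{"id":"ytc_Ugys17yEvmywAplVKqV4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
print(lookup_coding(raw_response, "ytc_Ugys17yEvmywAplVKqV4AaABAg"))
# -> {'responsibility': 'company', 'reasoning': 'consequentialist',
#     'policy': 'regulate', 'emotion': 'fear'}
```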