Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Take care of your children so that they don't have to confide in A.I. Chat Bots…" (ytc_UgxfWrYb2…)
- "Hey can I just say, I just now had an interview with someone who attempted to us…" (ytc_Ugy8em9hw…)
- "Im the innocent one. The chatbots have broken the filter multiple times, and idk…" (ytc_UgywTJlKq…)
- "Youtube can do better. Please address the the real problems PLEASE. Remove CP bo…" (ytc_UgzYMcM_i…)
- "How can anyone see it as ai being ok. From top to bottom every aspect us literal…" (ytc_Ugw3TnKVI…)
- "2:10 sort of unrelated, but its so fucking funny to me, that within our cult of …" (ytc_UgxebxD0Y…)
- "Ha ha ha. You made me laugh at the end of a long hard day. Thank you.…" (ytr_Ugwuzy2-L…)
- "That is the dumbest pro-AI argument I've ever heard. It's like saying using an a…" (ytc_UgwmD-xur…)
Comment

> And, or course, the thing nobody talks about re: AI is...that we do NOT have to make it, or, at least, didn't have to even start in this direction in the first place. Like GD idiots we are VOLUNTARILY creating something that could destroy us, or minimally, really mess up our lives in ways that can't be reversed. It's all pretty silly when looking at the whole thing from the outside And I DO think that AI would have to agree with me, which proves that it is smarter than us and has been from the get-go. If AI had a head it would be shaking it in ironic disbelief...tsk tsk tsk.

youtube · Cross-Cultural · 2025-12-09T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugxa4gSJ7rpGtRHK6yZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoMSw5lMeQxwd2UUd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6h1wRoa6lZ6kAFrp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8ia4d5Tb5Nqj1Z-d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyATSflRd7ZJLf7Jz54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-vRfnjTxfLpF1EZt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5qRGdx9CTsxcPqnl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz-Va44DPx_hf_HrQF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwUThrrNnc2qiVwbBF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwUdNBaoXFCH6QmLU54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
```
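The lookup-by-comment-ID step above can be sketched in Python: parse the raw model output as a JSON array and find the entry matching a given ID. This is an illustrative sketch, not the tool's actual implementation; the `lookup_coding` helper and the two sample entries embedded here are assumptions for demonstration.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, shaped like the
# output shown above. These two entries are copied from that array.
raw_response = """[
  {"id": "ytc_Ugz-Va44DPx_hf_HrQF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyATSflRd7ZJLf7Jz54AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one
    comment ID, or None if the model did not code that comment."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugz-Va44DPx_hf_HrQF4AaABAg")
print(coding["policy"], coding["emotion"])  # ban outrage
```

In practice the parse step may need a fallback (e.g. stripping markdown fences from the model output) before `json.loads` succeeds; that cleanup is omitted here for brevity.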