Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

The robot revolution is a real thing, just not in the way people think. First they'll take our jobs, then "mistakes" start happening and people start dying

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Responsibility |
| Posted | 2025-02-28T06:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwwwxR-tZJThdk_8Ch4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwcyB92_HZRHUxa0754AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxqGfvkiP-cgW9MC6V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxTikEUY_3MXrDS-KZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxc-zcKvrvbUpqBhOV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz27_seeRTmff2teoN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxfl7H3I6PxjqTNO0p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxVbUUuEC4dnZlwDjt4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxvB4YuVfLdLLBye_Z4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz7Kah6JuAG1jGV8dJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
```
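The raw response is a flat JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of turning such an array into the per-comment lookup shown in the Coding Result table above (the label sets are inferred only from the example output shown here, not from a full codebook; `parse_codings` and `LABELS` are hypothetical names):

```python
import json

# A two-record excerpt in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxvB4YuVfLdLLBye_Z4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz7Kah6JuAG1jGV8dJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

# Label sets observed in the example output; the real codebook may be larger.
LABELS = {
    "responsibility": {"user", "developer", "company", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(text: str) -> dict:
    """Parse the model's JSON array into a dict keyed by comment ID,
    dropping any record that uses an out-of-vocabulary label."""
    by_id = {}
    for rec in json.loads(text):
        if all(rec.get(dim) in allowed for dim, allowed in LABELS.items()):
            by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw_response)
print(codings["ytc_UgxvB4YuVfLdLLBye_Z4AaABAg"]["emotion"])  # fear
```

Validating labels before indexing matters here because LLM output occasionally drifts outside the coding scheme; dropping (or flagging) out-of-vocabulary records keeps the downstream table clean.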