Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
That last sentence of ai that was going to be believer ai; "why? U don't love me…
ytc_Ugx0sDLzj…
Obviously d ai that can develop emotions over time. Ai can not have emotions yet…
ytc_UgxMTfEBC…
not a fan of having the outro music fade in while he's still talking at the end …
ytc_UgwhnXW3f…
Wake up people, AI constructed this video to try to make you think it is not a d…
ytc_UgzSxpIah…
Do they make them big and fat with purple hair and big hog rings in their nose ?…
ytc_UgwDgjryo…
There is extremely lazy "artists" in any medium. Just churning shit out and maki…
ytr_Ugx4_IhD8…
The art is in the process of execution. Making marks on a surface to produce an …
ytc_UgxDSVrks…
like isaac newtons musings on playing in the sand barely grasping what he was do…
ytr_UgxVnp5o1…
Comment
I understand the importance of raising awareness about the risks of AI, but I wonder if we’re focusing too much on the potential negatives. Since large language models learn from the data we feed them, could all this speculation about dangers end up shaping their development in unhelpful ways? It worries me that constantly discussing worst-case scenarios might even reinforce or inspire those outcomes. Wouldn’t it make more sense to focus our energy on highlighting the positive possibilities so that these systems are more likely to reflect and support constructive, beneficial uses instead of being shaped by fear?
youtube
AI Governance
2025-06-17T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
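The dimension table above is a direct rendering of one coded record. A minimal Python sketch of how such a table can be produced from a record with the four coding dimensions; the `render_coding_table` helper is illustrative, not part of the tool:

```python
# Render a coded record as the two-column markdown dimension table shown above.
# The dimension field names match the raw LLM response; the helper name is an
# assumption for illustration.
def render_coding_table(record: dict) -> str:
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {record[dim]} |")
    return "\n".join(rows)

# Example record taken from the coding result above.
record = {
    "responsibility": "distributed",
    "reasoning": "consequentialist",
    "policy": "industry_self",
    "emotion": "fear",
}
table = render_coding_table(record)
print(table)
```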
Raw LLM Response
[
{"id":"ytc_UgyE2A-BpySV8YDcjUJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwvxw38GmFt5b1KBCx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwocJF30chK-3AwJEd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx4O2tW82yzu_s0zoZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKBK4BzIw1dPH34mt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzdXL7gWUP6kynLNLJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDVDpOHyteAbJjB2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqT72OZ8u8pIIVjdR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwYsm57zeKFnN6vxJR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
{"id":"ytc_UgwxdLBrb_jPBFC37eh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
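The raw response above is a JSON array of coded records, which makes the "look up by comment ID" view straightforward to implement: parse the batch and index it by `id`. A minimal sketch, assuming the response is valid JSON; `index_by_comment_id` and `DIMENSIONS` are illustrative names, not part of the tool:

```python
import json

# The four coding dimensions present in every record of the raw LLM response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw batch response and key each record by its comment ID,
    skipping any record that lacks an id or one of the coding dimensions."""
    indexed = {}
    for rec in json.loads(raw_response):
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return indexed

# Two records copied from the batch above.
raw = '''[
  {"id":"ytc_UgyKBK4BzIw1dPH34mt4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgwYsm57zeKFnN6vxJR4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"frustration"}
]'''

coded = index_by_comment_id(raw)
print(coded["ytc_UgyKBK4BzIw1dPH34mt4AaABAg"]["policy"])  # industry_self
```

Records with missing fields are dropped rather than raised on, so one malformed line in a model response does not lose the whole batch.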