Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
WARNING UNPOPULAR OPINION AHEAD. YOU HAVE BEEN WARNED
i dont consider using a…
ytc_UgzJESkH3…
The true purpose of AI is to collect personal data to predict and control behavi…
ytc_UgwJaDSde…
@sjla2009 no i don't. I am a single mom of two autistic childeren and AI was …
ytr_Ugx3-fIU5…
One Big glitch... Why is the super AI even using a keyboard, and why is it looki…
ytc_UgzJPsbZU…
Ai has been around for years.
All hype. When you talk to “ a machine “ when you…
ytc_UgwGHNkyg…
2:00 What I don't get is: cars are inefficient, compound traffic problems, are m…
ytc_UgzvkbX77…
I'm curious -- even if AI is heavily regulated, wouldn't there still be an under…
rdc_jkfwiuk
This dude is high😂😂😂😂😂😂 he doesn’t know if he’s a hippie or a pimp. But he’s pre…
ytc_UgxtjZO28…
Comment
I do not think that AI is going to turn robots against us; the danger is much more subtle and dangerous. Just one example: Google GPS. Haven't you found that GGPS takes you the wrong way? We (my wife and I) have experienced that irrelevant predicament not once but several times. Why? Impossible to know with certainty. Perhaps some user decided to mark some point on our route with an accident, and that was enough to send us on a detour. If AI is intentionally biased, then we, trusting it as we trust GGPS, will be biased as well.
youtube
AI Governance
2025-08-20T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9TMVI6YglBCo-6gZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3j8l-2y3t621pHLV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw22hDQP1P3MJaPz1N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlPi-IBipjSXNiYf54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwbuRcsq-GlpdqAMqV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxpMmxuOk7hn4fKsnp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgybOEIAPdZxqG7D2_F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzp-GUkff2zZvgqi7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxejC87VJsP7lygyHl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTwiQPqe-SBUgLkZV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
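Raw responses like the one above can be checked before ingestion. The sketch below is a minimal validator, assuming the allowed category values per dimension are exactly those visible in this sample (the real codebook may define more); the function name and error format are illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values visible in the sample response above; the actual codebook
# may include additional categories.
CODEBOOK = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "unclear", "liability", "industry_self", "regulate"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

def validate_coding(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors.

    An empty list means every record parsed and every dimension held
    an allowed value.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]

    errors = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(
                    f"record {i} ({rec.get('id', '?')}): "
                    f"unexpected {dim} value {value!r}"
                )
    return errors
```

For example, a record coded `responsibility=company, reasoning=consequentialist, policy=liability, emotion=fear` (the coding shown in the table above) passes with no errors, while a typo such as `"emotion": "feer"` is reported with the offending record index and comment ID.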