Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I really have a bad timing, I finally decided to follow my dreams and be an illu…" (ytc_UgytxL5kW…)
- "Good breakdown. But you forgot to include: false diialectic / false dichotomy ak…" (ytc_UgxRuO7oI…)
- "The most fucked up is that there is a demand to fill, there will always be someo…" (ytc_Ugwd4IfVZ…)
- "@ No, if you ask it different questions different time. It's not about pressing …" (ytr_UgzZiJdZ8…)
- "Too bad this AI guy leaks political bias in his rationality. It makes me questio…" (ytc_UgwoDVBef…)
- "Your own nuclear arsenal isn't a deterrent for them because they only give a fuc…" (rdc_dl17o6y)
- "Silicon Valley are using the means of greedy consumers within the system of capi…" (ytc_UgzqdqHJx…)
- "well you could say that ai is not the future but a future not one that I would l…" (ytc_Ugx9G67fs…)
Comment
Why does nobody think about security... AI is nothing else than learning from examples ... lots of examples... and then do some statistics about possibilities.... but what when somebody starts to feed the AI with false information ... more false information than formerly trained..... Then suddently a mouse becomes a dog... at least that would be the least problem...
Yes human make mistakes ... but they cannot be manipulated (ok, maybe with a lot of money 🫣)
youtube · AI Jobs · 2026-02-09T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwB_FujTtz1FJ_QKAd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"approval"},
{"id":"ytc_UgzjCKBQnwNHxavIvAB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzAulysMiaHjzJWvs14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw-gjCTHWI9SFuSQP54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzO1IhC0SkmRH2nhJt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxg-98Fx6y2XP9-WiR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwfojKoYnAZqIEy8TJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7cOBsZRQGMQJRmFR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugyiekd1swAeKVlELYR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1rNAbtJGZDpLB93d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
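The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each coding dimension. A minimal sketch of how such a batch could be parsed and validated before it is stored (the allowed value sets below are inferred only from the records shown on this page and are an assumption, not the project's full codebook; the sample IDs `ytc_x` and `ytc_y` are hypothetical):

```python
import json

# Allowed values per dimension -- ASSUMED from the records shown above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must be an object with an "id" and a permitted
        # value for every coding dimension; anything else is dropped.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record batch: the second record uses an out-of-codebook
# "responsibility" value and is filtered out.
raw = (
    '[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"robot","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"}]'
)
kept = parse_coded_batch(raw)
print([rec["id"] for rec in kept])  # → ['ytc_x']
```

Validating against a fixed value set at ingest time catches the most common LLM coding failure, an invented category, before it contaminates downstream counts.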