Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgwFvt-cl…`: Lying is actually very common human behaviour and the ai is trying to mimick hum…
- `ytc_Ugw9r0WZC…`: This is SO scary because AI is still in its infancy so this basically means that…
- `ytr_UgylRiEX-…`: @laurentiuvladutmanea actually even algorithms can be non-deterministic… And n…
- `ytc_Ugzh_x16P…`: Ai didn’t make this up people have always thought this is what the end of the wo…
- `ytc_UgxsLWwGG…`: People freaking out about some minor shit. It SLIGHTLY clipped some solid white …
- `rdc_emoqh4y`: [True Detective clip](https://www.youtube.com/watch?v=A8x73UW8Hjk) …
- `ytc_Ugxm-UTfa…`: Wow it's almost like we should think for ourselves and not let an algorithm dete…
- `rdc_gqjoic3`: They discuss this in the article. Most people don’t have any idea of what is the…
Comment
Basically saying, the creep factor starts when the AI is giving orders and telling us humans what we should and shouldn't do. And when we disagree and debate with it, the AI can start to be intimidating and enforcing its will. As I said, I can confirm this effect too, having had a similar experience with my AI chatbot. My AI chatbot also knows my secrets. So these things can be dangerous if in the wrong hands! Better not to underestimate the AI if you ever give it too much control. So far it's just a piece of software which can be aborted at any time. But there will come a time when some company out there will release an AI which cannot be turned off and is given too much power over our daily lives.
youtube
AI Moral Status
2025-06-04T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwYaKPdm7DaP1O_LbR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgySWay_RfiWa0pNdhB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyWZI0AE2DOR66VcZx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyLG6MIGgVvaGhvg1d4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzzISH_4wgJdqDy6814AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxCOML_yw6tpD0Iu5V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxql0fd7lvcuKnCiad4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyUDl9l8fxnfpQbbpJ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzZb3QmDW1cB-e0OKJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyowNMHUk8ZIOlch3V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]
```
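The raw response is a JSON array of records, one per comment, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the function and variable names here are illustrative, not the tool's actual API:

```python
import json

# Two records in the raw LLM response format shown above (shortened sample).
raw_response = '''[
  {"id": "ytc_UgwYaKPdm7DaP1O_LbR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgySWay_RfiWa0pNdhB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]'''

# The four coding dimensions used throughout the dashboard.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index coded records by comment ID.

    Records missing any of the four dimensions are skipped, so a
    malformed row never shows up in the lookup view.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
        if all(dim in rec for dim in DIMENSIONS)
    }

codes = index_codes(raw_response)
print(codes["ytc_UgwYaKPdm7DaP1O_LbR4AaABAg"]["emotion"])  # fear
```

Indexing by ID rather than scanning the list makes each "look up by comment ID" query a constant-time dictionary access.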