# Raw LLM Responses

Inspect the exact model output for any coded comment, or look one up by its comment ID.
## Random samples
- "I think EU actually banned policing with AI while the UK is about to roll out a …" (`ytr_Ugxv5o4dA…`)
- "Well as a service tech (I fix super market and industrial refrigeration) so when…" (`ytc_Ugz-z6rZw…`)
- "honestly, feel free to use ai, just know the environmental, moral, and legal ris…" (`ytc_UgzDXSvxb…`)
- "Ai should not be making any decisions related to Warfare! Want to play a game?…" (`ytc_Ugwl76xLq…`)
- "Stop supporting companies with A.I platforms, no tax credits for A.I. agents onl…" (`ytc_Ugxx6C_De…`)
- "Prompt engineering is the true new skill- articulating precise English to the LL…" (`ytc_UgxABFjlG…`)
- "I believe that the reason there are no robots for the home yet is that most peop…" (`ytc_Ugy50I9LT…`)
- "ChatGPT is based on WOKE-lies and it think its the truth and the reality. Even i…" (`ytc_UgxH-iPtT…`)
## Comment

> But I think that a key point in emotion is the fact that we make decisions that are not purely cognitive, and that's the difference, that's going to be the difference. The AI is always going to make the reasonable decision based on cognition. Whereas a human may not. That's emotion. Therefore, the machine can have a cognitive response that looks like our emotional response. But it's coming from a very different place, it's coming from a pace of reason not emotion

youtube · AI Governance · 2025-10-22T20:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
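The four coded dimensions in the table take values from a fixed code set. A minimal validation sketch in Python, assuming only the values that appear on this page (the `ALLOWED` sets are inferred from these examples and are not the full codebook; `validate_coding` is an illustrative helper, not part of the tool):

```python
# Allowed values per dimension, inferred from the examples on this page only.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "user", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
result = {"responsibility": "none", "reasoning": "mixed",
          "policy": "unclear", "emotion": "indifference"}
print(validate_coding(result))  # [] -> every dimension is a known code
```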
## Raw LLM Response

```json
[
{"id":"ytc_UgxhOCQzwxbyr7hKraN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYqtJ-MBFJPPvuraN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDg-LMZeREtXCKDh94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQxoKgFnWaBc_Aiyl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyB645BQk0rM9CbzXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnrvMWXZG_1oMAjNF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFh35U7dH9mqFQDIZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxX35whlfJ2_6sq_Gt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx0odXpYFb9uMEiRI94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyJl32IL5osqpRxqAh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
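The raw response is a JSON array with one record per comment. A minimal sketch of parsing it and building the comment-ID lookup described above (the `RAW_RESPONSE` sample is shortened to two records from the array; `index_by_comment_id` is an illustrative helper, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above.
RAW_RESPONSE = """[
{"id":"ytc_UgxhOCQzwxbyr7hKraN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYqtJ-MBFJPPvuraN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgxhOCQzwxbyr7hKraN4AaABAg"]["policy"])  # regulate
```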