Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_m9iq88i`: There are so many morons here that think alignment means “robot follow order of …
- `ytr_UgzJYsSu0…`: @hectorcolon2563 Thank you for pointing out that this fight wasn't against a rob…
- `ytc_UgyB_ECWl…`: This is done intentionally so that people can be programmed to believe AI videos…
- `ytc_UghzYjspM…`: Denying a sentient being rights because of no flimsier a reason than "We made it…
- `ytc_Ugw-qLTxx…`: She was flickering his eyes . she want to dominate the human being and his smlie…
- `ytc_Ugz-UUXvx…`: First they lost the jobs in their home country to workers from India, now they l…
- `ytc_UgyvwDOTx…`: I am not an artist at all, but frequently work on personal projects that require…
- `ytc_UgwONVhRh…`: There are two laws of intelligence: 1. Intelligence, whether human or artificial…
Comment
honestly the flip side of things that people don't realise is the most evil people who have no care about humanity are the ones in charge and the biggest threat, is it even reasonable to assume that AI would be worse than those people? Because these people are psychopaths that lack the ability to comprehend empathy, and if AI becomes smart enough, there's no reason to say that they also would be unable to comprehend it if they learned from us.
youtube · AI Governance · 2023-07-07T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyevAa5KtDnj8OU4b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyQiQqu6vPKYgtwAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwUKbDSKtuqlgq2yrR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_BDnYNmQLiduU12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
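The raw response is a JSON array with one object per comment, so recovering the coding for a given comment reduces to parsing the array and indexing it by `id`. A minimal sketch in Python, using two rows copied from the response above; `index_codes` is an illustrative helper name, not part of the tool itself:

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgxTAGqXWdHBOrV1WlV4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_codes(response_text: str) -> dict:
    """Parse a batch coding response and index its rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_codes(raw_response)
# Look up the comment coded in the "Coding Result" table above.
print(codes["ytc_UgxTAGqXWdHBOrV1WlV4AaABAg"]["emotion"])  # prints "outrage"
```

The same lookup works for any of the four coded dimensions (responsibility, reasoning, policy, emotion), which is how a per-comment view like the one above can be reconstructed from the batch response.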