Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> This sounds dangerously trying too hard to be likeable.
> I fear it's all bs data and not actually true stories being reported against ai usage.
> Sure it all checks out but still too good to be true kinda feeling
> No company is gonna admit default they'll just wait ai progress to be able to make up for its previous mistakes.
> Most of the failure cases highlighted here were from 1-3 yrs ago.
Source: youtube · AI Jobs · 2026-02-09T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwvT09GwTFTyZlAIZJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_p0bXBoSpC8qT_N94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwRhJLni4uzh02qI714AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUWgF9fpEVrC_iLeF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxIwhoyz0iDIDhyevN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxykKH816LnbeQ5zTF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyhqMLNSS3pc0u1F7t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzgsBUM3_ln3GLyyrZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwHKLZrrqeG7IttqZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzYXyNxZas5YWX1UV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
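The raw response is a flat JSON array of records keyed by comment ID, one object per coded comment. A minimal sketch of parsing and validating such a batch before indexing it by ID — note the allowed category sets below are inferred only from the values visible in this sample and the table above, not from a published codebook:

```python
import json

# Allowed values per coding dimension -- inferred from the sample response
# above; the real codebook may define additional categories (assumption).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "mixed", "fear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid records by comment ID.

    Records missing an id or carrying an out-of-schema value are dropped,
    so a single malformed record does not poison the whole batch.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        bad = [dim for dim, allowed in SCHEMA.items() if rec.get(dim) not in allowed]
        if cid and not bad:
            coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_UgzgsBUM3_ln3GLyyrZ4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded["ytc_UgzgsBUM3_ln3GLyyrZ4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the lookup shown on this page cheap: once the batch is validated, fetching a coded comment is a single dictionary access.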