# Raw LLM Responses

Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- `ytc_UgxY9r1aF…` — "We use ai to develop it but we cant use it without copyright infringement to be …"
- `rdc_lp7o6cz` — "I work at a large datacenter. It’s 50MW. Only 100x smaller than one of Altmans…"
- `ytc_UgyQsGsmg…` — "I've noticed I cannot distinguish good content from bad on YouTube cause of AI g…"
- `ytc_Ugx9uR9jO…` — "At the moment cant any LLM take my job and will never be, because what you call …"
- `ytr_UgzU1tNdB…` — "@christoforospaphitis4090 Being statistically wrong 9 out of ten tries is ineffi…"
- `ytr_UgzXRWUSS…` — "We're glad you were surprised! Sophia really does have some insightful responses…"
- `ytr_UgzxXRh89…` — ">Every lawyer makes mistakes too They sure do, but very few of them completely l…"
- `ytc_Ugzzu7_op…` — "It’s not conscious, it’s the dynamic between connection between humans and artif…"
## Comment

> This is why AI should not be trusted for serious topics. It will always need supervision. It has no concept of consequences, it does not have the human fear of going to jail, of getting demoted, penalized or fired, of becoming laughing stock, it has no concept of morality and has no motivation to improve. It is dangerous to trust a chatbot imposter with serious topics and tasks when lifes and property are at stake. Businesses may be saving money, but they are replacing employees with something that doesn't care about consequences, has no fear of them and has no issue with making a mistake and wrecking the business. AI cannot be taught morality or fear of consequences. It makes for a dangerous and reckless employee replacement.

Source: reddit · Topic: AI Governance · Posted: 1756899715 (2025-09-03 UTC) · ♥ 2
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
### Raw LLM Response

```json
[
  {"id":"rdc_nc48yu0","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nc41gjc","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"rdc_nc49dm2","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nc3eufb","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_nc6ct2q","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
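The coding table for a comment is recovered from the batch response by matching on the comment ID. A minimal sketch of that lookup, using two of the records shown above (the `parse_codings` helper name is illustrative, not part of the pipeline):

```python
import json

# Two records from the batch response above, kept short for illustration.
RAW_RESPONSE = """
[
  {"id": "rdc_nc48yu0", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nc6ct2q", "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

def parse_codings(raw: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = parse_codings(RAW_RESPONSE)
print(codings["rdc_nc6ct2q"]["policy"])  # -> regulate
```

Indexing by ID makes the display robust to the model returning records in a different order than the comments were submitted.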