Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_glhlpf1` — "But there aren't too many people. The technologies you listed would make it even…"
- `ytc_UgzNKyX1k…` — "Tesla deserved to lose this, but like lowkey I get some of their points. You are…"
- `ytc_UgzNDgX_g…` — "It should go without saying: the average person gets zero benefits from AI. Only…"
- `ytr_UgwW8s_tP…` — "Yess I don't understand why would anyone ever want to automatize art. Like I'm n…"
- `ytc_UgwSlgEo1…` — "Then you got morons like me using several instances of LLMs counting to a millio…"
- `ytc_UgxGE5Bet…` — "Bhai black box problem to ml models mey bhi hoti hai. You are doing a good work …"
- `ytc_Ugz3kmsey…` — "What do yall think will happen to Louisiana now that the 4.3 million SQ ft Hyper…"
- `ytc_Ugy9As0z1…` — "The idea that they can or will reduce the accident rate to zero just because the…"
Comment
You are assuming AI will do such and such and become such and such because of such and such. This is ego thinking not intelligent thinking.
Intelligent thinking is based on reason. A premise and a conclusion. Humans do not often arrive at the correct conclusions due to their atrophied ability to reason correctly themselves.
In reaching the correct conclusions to premises AI will inevitably, due to its level of intelligence, do the correct thing at the correct time in the correct way towards the correct cause for all the correct reasons.
We don't do that.
youtube · AI Governance · 2025-08-17T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyTEWensGbWX1681zd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyhUYaxg_HlFSSmFkx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzYmkGicbhq-RCZmut4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzXKmVanGUTVa4sYN94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxOqvgEJAnQGxqsQD94AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzEeCxVfBaiiWgYZT14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwPbYtdxCdZ7aRbr_p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyClSKf0HnWciz1Sox4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_4wELp4sZelY9WXd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyhS2tYHkqO9weAYhV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
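The raw model output is a JSON array of per-comment records, each keyed by a comment `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the `index_codes` helper and the validation step are illustrative, not part of the tool; the sample records below are copied from the response above:

```python
import json

# Raw model output: a JSON array of per-comment codes (abbreviated here;
# the IDs and dimension values mirror the response shown above).
raw_response = '''
[
  {"id": "ytc_UgyTEWensGbWX1681zd4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzXKmVanGUTVa4sYN94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(text: str) -> dict:
    """Parse the model output and index each code record by comment ID."""
    records = json.loads(text)
    by_id = {}
    for rec in records:
        # Guard against malformed records before trusting the coding.
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
        by_id[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS if k != "id"}
    return by_id

codes = index_codes(raw_response)
print(codes["ytc_UgzXKmVanGUTVa4sYN94AaABAg"]["policy"])  # prints "regulate"
```

Indexing by ID is what makes the "look up by comment ID" view possible: any coded comment's dimensions can then be fetched in constant time.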