Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The question is, how reliable are driverless trucks? Can they detect a motorcycl…
ytc_UgyD74Iq-…
AI is extremely stupid to the point where it can't teach you to do long division…
ytc_Ugwj2tDLg…
@_Mintyz_ You didnt get the point. The car CANT detect everything. That is why t…
ytr_UgwfRu1kU…
Hey Elon Musk,
Elon I'm afraid of AI danger. As an AI and ML guy I wanna go de…
ytc_UgyPDOlQs…
AI is only as good as the available electricity and Nvidia GPUs to sustain them.…
ytc_UgwP1NjqO…
It looks like AI has been taken over by socialists. To me it is hard to believe …
ytc_Ugz_H_jHS…
I hate that their argument is that we’re “born with talent”. no. you couldn’t be…
ytc_UgzbmCAVf…
The thing is, when it will really come, and nobody do when, people will continue…
rdc_n811k2h
Comment
A major reason why I believe that Psychologists, behaviourists etc are essential in AI development and implementation is because like the news presenter said " The next task is teaching them to preserve what we value." Except in reality a lot of our behaviours and actions throughout human history either don't mention or align with the things we say and wholeheartedly believe we value but they can even completely contradict them.
That fundamental issue with human psychology and behaviour sound like nothing but a recipe for disaster in the future......😐😬
youtube
AI Moral Status
2025-06-04T14:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxyUFlx3vqzrBVcE-J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzPUIsljKakarmejzV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxseywyidAD9PcFdnd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzx8VkgQxgVBRFOEFZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxCAik_r6-j0uLzPF54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy5pNhT781DINcrxIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMh8xvyhS5kN8CkNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJ0K2uvjtjudpYxGp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzwqTj5ygv_u5nexQR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgylhIXEfFv9ZQ7-jXZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
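A raw batch response like the one above can be parsed and indexed by comment ID before the per-dimension values are displayed. The sketch below is a minimal illustration, not the tool's actual ingestion code: the `ALLOWED` sets are inferred only from the values visible in this sample, and the real codebook may define additional categories.

```python
import json

# Two rows in the batch format shown above (IDs copied from the sample).
raw = '''[
{"id":"ytc_UgxCAik_r6-j0uLzPF54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy5pNhT781DINcrxIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Permitted values per dimension, inferred from the sample output only;
# the actual coding schema may include more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "unclear"},
}

def index_codings(raw_json: str) -> dict:
    """Parse one batch response and index rows by comment ID,
    dropping any row with a value outside the allowed sets."""
    coded = {}
    for row in json.loads(raw_json):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

coded = index_codings(raw)
print(coded["ytc_UgxCAik_r6-j0uLzPF54AaABAg"]["emotion"])  # approval
```

Indexing by ID is what makes the "Look up by comment ID" view possible: each coded dimension can then be rendered directly from `coded[comment_id]`.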