Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Salim nailed it: "Personhood is a social contract and it determines responsibility and accountability". I would add that AIs are given orders and just like children, if they disobey or break the law, their owner/parent is responsible and held accountable. Otherwise why not allow kids to buy alcohol and play with real guns? And if anything goes horribly wrong, just throw the kid or the damn robot in jail! Never mind the owner/manufacturer that gave the order or programmed it.
Responsibility and accountability are deeply tied to morality, itself tied to spirituality which is rooted in the soul. Until proven otherwise, only humans can have a soul which can, in rare occasions, even be subject to a near death experience.
Source: youtube · Posted: 2026-02-06T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw0XRM0hx0grnGAuhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxvOe6qFA1qdeRMLTd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXP1upOxINXGvotAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwBJak-QTztJlT6zJF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzuii4W7E36Rm31Wvx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzib_HuzOVpxHqJ2B54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwbEqJLmZPn2gWBjYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxfPQAK7XY2bHh1wUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw2KlpFlnDAGB9KsrZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx5nkRz8DawNSOrgRV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
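The lookup-by-ID workflow can be sketched as follows. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response format shown above; the indexing code itself is a minimal illustration, not the tool's actual implementation, and the abridged sample array is hypothetical.

```python
import json

# A raw LLM response is a JSON array of per-comment codes,
# one object per comment ID (abridged sample for illustration).
raw_response = """[
  {"id":"ytc_Ugw2KlpFlnDAGB9KsrZ4AaABAg","responsibility":"user",
   "reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx5nkRz8DawNSOrgRV4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]"""

# Index the parsed codes by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Looking up one ID recovers the four coded dimensions
# shown in the "Coding Result" table.
code = codes["ytc_Ugw2KlpFlnDAGB9KsrZ4AaABAg"]
print(code["responsibility"], code["reasoning"],
      code["policy"], code["emotion"])
# → user deontological liability approval
```

Indexing the whole batch once, rather than scanning the array per query, keeps repeated inspections cheap when a response covers many comments.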