Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "Maybe it will be nice to have a sugar daddy AI; be its favoured pet. Who knows?!…" (ytc_UgxF4kpWe…)
- "Fast forward - now they're saying AI top ticked and the bubble is about to burst…" (ytc_Ugyjbqaxv…)
- "AI is under the umbrella of Source creation. Everything is. It can be helpful to…" (ytc_UgwHzq46b…)
- "I feel privileged to have witnessed this conversation. I am not one for small ta…" (ytc_UgzzT4R5X…)
- "“AI” (calling itself “Starfish” here) attempts to completely mislead and destroy…" (ytr_UgzFLjc8r…)
- "All I'm hearing is \"even simple AI is dangerous\". Either way, though --- even if…" (ytr_Ugz5J05nh…)
- "Courts? This is the least issue with AI, as court forensics are pretty good at f…" (rdc_o5qn847)
- "At what point could we interview ai or super intelligence.. that is assuming it …" (ytc_UgycyCxgT…)
Comment

> If a robot became conscious and somehow develop emotions then yes they deserve rights. I'd suggest reprogramming them before any more do.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2017-02-23T19:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugj9uA4E2qdNfHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjnOffaiIS5qHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugj98md7zFOMrHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgitNrH9VLI5X3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Uggkdf3AcQC3ZHgCoAEC","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggqumG_AwEw_ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggJ3-NtmsdA4ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UghacdqQa_8JXXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UggZ2aPEfECZoXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgiBCDn6kZ0PaHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
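A raw batch response like the one above can be parsed and indexed by comment ID to recover any single comment's coding. The sketch below is a minimal, hypothetical illustration: `raw_response` is a shortened two-row stand-in for the full model output, not the exact parsing code used by this tool.

```python
import json

# Shortened stand-in for a raw LLM batch response: a JSON array of
# per-comment codings, each keyed by a comment ID.
raw_response = """
[
{"id":"ytc_Ugj9uA4E2qdNfHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgitNrH9VLI5X3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
"""

# Parse the array, then index it by comment ID so any coded comment
# can be looked up directly (as the page's ID lookup does).
codings = json.loads(raw_response)
by_id = {row["id"]: row for row in codings}

coding = by_id["ytc_UgitNrH9VLI5X3gCoAEC"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Indexing by ID also makes it easy to spot duplicates or missing codings: compare `len(by_id)` against the number of comments sent in the batch.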