Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- Truck drivers are going to be wiped out too. Also, burger flippers and partially… (ytr_UgywAAF0G…)
- I watch this and see from the dashcam how the bike just came out of no ware / Sh… (ytc_UgzgJm2fo…)
- I knew I wasn't going to like what I heard in this video. I've been avoiding AI … (ytc_UgwX1RlSb…)
- Do you have a link for the AI weapons without needing to put an email to read th… (rdc_nq03w5n)
- This is not necessarily true, AI can mimick human behavior well enough that many… (rdc_ohvsezb)
- “The Microsoft cofounder published a seven-page letter on Tuesday, titled "The A… (rdc_jd57ngq)
- My fear is that it also makes people more trigger happy. Studies have shown that… (rdc_jfzrot0)
- i wish ai was a person so i could kill it or a bug- :D… (ytc_Ugx16LszH…)
Comment
Will we ever get to a point where AI becomes more than... whatever it is right now? With the recent discoveries in computing technology by microsoft, is it possible to apply that sheer potential of massive computing ability to create something closer to true AGI? Or is it simply impossible, or should not be attempted for ethical reasons? I want to see more practical uses of AI other than to act as a cheap simulacrum of humanity.
Source: youtube | AI Moral Status | 2025-07-23T12:3… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxRgwd9GKDs1SlVBMB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw-3R7EvdhH-Iri39B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwn8aqEFjmR_lVB5uh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjiGS-K1J898fr31p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyjSgA3kAmJ7U0xm7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzJ1EC6g7BCXIH0OJ54AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx9U2z7jX0KNwLWG2x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAWRtbc3syBqmVRGV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxoYk1EJuHxTPH9_Qt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzhodXJV4UXbNyluSx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
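A raw response like the one above is only usable downstream if every entry carries a known label in each dimension. Below is a minimal sketch of how such output could be parsed and validated; the `ALLOWED` sets are inferred from the codes visible on this page (the real codebook may define more categories), and `validate_codes` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the codes seen above.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose labels
    are all drawn from the allowed sets."""
    entries = json.loads(raw)
    return [
        e for e in entries
        if all(e.get(dim) in labels for dim, labels in ALLOWED.items())
    ]
```

Entries that fail validation (an unknown label, or a missing dimension) are dropped rather than repaired, so the affected comment IDs can be queued for re-coding.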