Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgweqhEzp…`: God bless us all because ai is not going to work bc kids can't just learn from …
- `ytc_UgyaiS38H…`: Yea.. that’s why we should poison every form of media out there to stop the ai, …
- `ytc_UgyxKPE7U…`: There are many good uses for AI that can help bridge gaps. But there is A LOT t…
- `ytc_UgxC-esFS…`: 3:15 yes! And a higher workload is actually more taxing than a lower one. The A…
- `ytc_UgzfFfKNU…`: This is great for politicians. They can say any bit shit crazy thing they want a…
- `ytc_Ugypoxbj2…`: I think AI in education is useful only if you’re using it as a study aid and not…
- `ytc_UgxKHtzTM…`: Hey can you update on disney recent decision of having ai on disney plus. Becaus…
- `rdc_mzwqggy`: Please define “thinking for itself”. Your entire post is literally meaningless i…
Comment

> I asked ChatGPT about this. The “rules” in the video force it to give short, spooky words, so it sounds like prophecy when it’s really just pattern-matching. The “2032” bit was formatting, not a hidden message. Basically, it’s more Magic 8 Ball than secret truth.

Source: youtube · AI Moral Status · 2025-08-26T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz8HVhC4s97-qxkWOd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2cZU0eByd5XS2CvJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZ96Rzz4j15FkaJft4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgweOSgmYXzg-_C18ul4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz0GX2WTlpIdLoWorp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGCgjmBZP6-ftGH3R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJPUx8a4mHdJuIbTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxMDxL5MbisOxvj08V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgywMtVEiU4wvrFYcdR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxThZ3f7WtnPZIqYOJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]