Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I do think AI is quite useful for some things, but anything that takes away from…" (ytc_Ugy1DxPiM…)
- "Artificial intelligence is like humanity's final test of its humanity. I think t…" (ytc_UgwjsxUu8…)
- "This is a bit fishy to me. Chatgpt don't say nah or bro lol ever, I have spoken …" (ytc_UgxNXyHMr…)
- "Hinton sounds an alarm about a horse that has already bolted from the barn, some…" (ytc_UgxCiAxhc…)
- "He has thought so much about AI Safety and Simulation Theory, but he seems to ba…" (ytc_UgxPS3J7a…)
- "AI will take away white-collar jobs. Blue-collar jobs are safe as long as roboti…" (ytc_UgwSoZfeQ…)
- "Chat bots aren't creative. They pull from knowledge that's in their learning mod…" (ytc_UgypqKSU0…)
- "I like your points, very thorough thinking. Personally, I have used AI Images to…" (ytc_UgwOdmKCp…)
Comment

> Genuine question: why is no one focusing on AI literacy for middle or high school students? Not how to monetize AI, but how it actually works, its limits, biases, and failure modes. We teach kids how to use calculators without pretending they’re infallible—why not the same with AI?

youtube · AI Moral Status · 2025-12-24T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzBdVB9XaoGwJ8_5T54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzg_t8yvysR0dnoEm14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz-pgK4jtM8wVudXAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwygYyo3IIrJtnclF14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzd0dFcvOt6ljkb1DV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvxrO-WB9-lWyNn654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw7KYgn5HkKknAtQGt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwS6BvbzcdkWgKxe1V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3f4GUAVCXTK2jUPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxN0v7FIEUGUQrLO8t4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
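A raw batch like the one above can be parsed and indexed by comment ID before use. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the coded rows shown on this page, so the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (assumption: the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def index_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response and index schema-valid rows by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # skip rows without a recognizable YouTube comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Example: one row from the batch above.
raw = ('[{"id":"ytc_Ugzg_t8yvysR0dnoEm14AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
coded = index_coded_batch(raw)
print(coded["ytc_Ugzg_t8yvysR0dnoEm14AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the per-comment lookup shown on this page cheap: each coded dimension is one dictionary access away from the comment identifier.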