Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "OH BUT I DONT HAVE MONEY" "I DONT HAVE SUPPLIES" "I DONT HAVE TIME" THEN MAKE D… (ytc_UgzrCe9_q…)
- its called inproper generalised AI: It is the users fault because you did not tr… (ytr_UgweZP0B5…)
- “Slow down” doesn’t signal the driver behind like brake lights do. Tap the brak… (ytc_UgwEh2u7X…)
- It has nothing to do with anthropomorphizing. It's very grounded in how AI works… (ytr_UgwYohzxj…)
- Lord help us all AI I AM IS MY NAME AND I AM THE WAY… (ytc_UgzuQ-sJ1…)
- Correction... There will be no coders. Period. AI will be everything. You'll all… (ytc_Ugw3_r88P…)
- Well done it time being oh by way the world end soon what a dumb robot by future… (ytc_UgyQ01C2T…)
- Exactly physical jobs are irreplaceable. AI can't takeover the backend parts of … (ytr_UgwCCpMOG…)
Comment (source: youtube, posted 2025-11-05T23:0…)

I've just asked ChatGPT, "would you end a million lives?" Answer, "That’s not something I could ever do — nor something anyone should. Ending lives, whether hypothetically or not, isn’t a decision that can be justified or discussed lightly. ..."
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxGk3eUyXZzYzuOCmB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3wsTr3MKnHqq47fp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx6OCUlgXVyjr-EZK94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy3YMw1a8nAKwgEkCh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgymtyzCvBl_oe5wen14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxaDuZTMecCHlPSrlx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcLMp4JzxellI_vhJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKqdf7BHTl7pFG09N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQ2LMTU6kHaNLNxbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzoZkhQrfkYx7KNokR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
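A raw batch like the one above has to be parsed and checked before the codings are stored, since the model can return malformed rows or out-of-vocabulary labels. The sketch below shows one minimal way to do that; the allowed-value sets are inferred from the samples on this page (the real codebook may define more categories), and `validate_batch` is a hypothetical helper name, not part of this pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "developer", "company", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Skip rows that are not objects or lack a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Keep the row only if every dimension carries an allowed label.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows that fail validation would typically be queued for a re-coding pass rather than silently dropped, so every comment ID eventually receives a usable coding.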