Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Do this exact same thing but instead of just being able to respond in one word have them stick to short statements. I literally gotten different answers for the same questions. What’s cool about ChatGPT is that you can have them take you deeper for these responses and have them clarify what they mean. I’m sure just saying yes no, maybe, so; is way more simpler by technically saying “yes” when the answer is probably “not really it’s more complicated let me explain why blah blah blah”
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-08-04T12:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzmuaY3tngH0uygSPl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxTQa3SZ4hwBH-noKl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzlklqEAcRMJozqrO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXhRubTf2agwZ87JR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtXxFMmWEbwlYXcsl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzWkHopk7_osTNuhLp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyqwO5U5EU9h7edSb94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjPZTxT1toFU9Z2kB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUOMPQbWNYoG0ZyJh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyoU8fhAscR8zuxz3d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
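Because the model codes comments in batches, inspecting the exact output for one comment means parsing the raw JSON array and indexing it by comment ID. A minimal sketch of that lookup, using two rows excerpted from the response above (variable names are illustrative; in practice the full saved response file would be loaded instead of an inline string):

```python
import json

# Excerpt of a raw batch response: a JSON array of per-comment codes.
raw_response = """
[
 {"id":"ytc_UgzmuaY3tngH0uygSPl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzlklqEAcRMJozqrO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgzlklqEAcRMJozqrO54AaABAg"]
print(record["emotion"])  # -> approval
```

The same dictionary also makes it easy to cross-check a displayed coding result (like the table above) against the raw model output it was parsed from.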