Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Actually the most insanely accurate description of how it feels to hold an AI ac…" (ytr_Ugx8Y4-XR…)
- "The AI one looks better than most of the human ones I'm gonna be honest…" (ytc_Ugz21cI_M…)
- "If each of us had a robot, as a personal advisor/friend, who is always there and…" (ytc_UgwSGOv4v…)
- "People were opposed to relatively autonomous robots that could kill without huma…" (rdc_oho6lo6)
- "Everyone go onto Congress.gov, enter your address and demand your senate represe…" (ytc_UgxvO5uiG…)
- "The moral of the story is don't forget to thank chatgpt after you finish your sh…" (ytc_UgzXWm5p_…)
- "Aware of what if a robot is programmed how would a programmer insert feelings or…" (ytc_UgwrcmRNs…)
- "@borealphoto Exponential technologies work in synergy with each other to not onl…" (ytr_UgyupyOdL…)
Comment

"So AI will do morally ambiguous things for self preservation? Go figure. How's that any different than a human? AI aren't more evil than us; they're just like us."

Platform: youtube · Incident: AI Harm Incident · Posted: 2025-07-27T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxpiLA1zq4Ppu5p15x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwpF6LjhH6AaLtygLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxv-FyaK1TeJBc_Zyt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyrWA4M48esA6RUaYx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgxN-7k2nynL9jwlXTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyTEur-qpjJSebMDYZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUn6_JiOkMTNsshOZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgygmrgwFTrGVpNuC9l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwH482PKgkFoJItJbx4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzdJDFxnFWmy3Vokt14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]