Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So let me understand this...According to a search using AI, Hinton is worth betw…" (ytc_Ugw01LdLU…)
- "🔗 Chain of Thought Protest Manifesto 🔗 💡 We demand accountability, transparency…" (ytc_UgyY0eTJR…)
- "Well, AI doesn't buy goods or services or rent property, so... Who are they g…" (rdc_kig972r)
- "I constantly train AI. I don't do anything like this. That being said, it's poss…" (ytc_UgweoHIbB…)
- "Perhaps we’ve reached a point where the wanton degeneracy of creators is no long…" (ytc_Ugzmjmmk6…)
- "I really don’t know much about this subject but i constantly fluctuate between a…" (ytc_UgwYPnkgN…)
- "Keeping tabs on an AI agent will be like riding a bucking bronco that got loose …" (ytc_UgxWTOKVQ…)
- "The girl at the beginning being excited to talk to a robot. We are literally los…" (ytc_Ugyrp4sZw…)
Comment
Teslas have touch screens and plenty of computing power. They could easily require a new driver to sit in their car, watch an unskippable safety video making the risks of the "self-driving" mode clear, and answer some multiple-choice questions to establish that they have sufficient understanding before using it the first time. That would be a better safety threshold for operating a machine that is still actively in beta testing.
youtube
AI Harm Incident
2025-08-15T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
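Each dimension in the coding result takes one of a small set of codes. The sets below are inferred only from the values that appear in the raw responses in this log, so they are a sketch, not the authoritative codebook:

```python
# Allowed code values per dimension, inferred from responses observed in this
# log; the real codebook may contain codes not seen here.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate(code_row):
    """Return the dimensions whose value is not in the inferred codebook."""
    return [dim for dim, value in code_row.items()
            if value not in CODEBOOK.get(dim, set())]

# A row like the coding result above: every dimension set to a known code.
row = {"responsibility": "company", "reasoning": "deontological",
       "policy": "regulate", "emotion": "approval"}
print(validate(row))  # []
```

A check like this makes it easy to flag model outputs that drift outside the expected label set before they reach the results table.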
Raw LLM Response
[{"id":"ytc_UgwuRx9UpPhP587tdo14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzSjj9Tp60Cr89I_tZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy495fkc9ChMossIzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyk1L8QXTLa0ZHUmjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw6Jio8EXR8fpft5eR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwRgotJplF-O_rekRx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxwOlyGLFQVbRUGKN54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwY0Ozrgn3a-9CpDex4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9UWRQflY_Lol5RJp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKsTeKHXeqYl2q-kd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"})