Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- "On any website anythink with AI i ad in to adblocker, so there is no single webs…" (ytc_UgxcWgbpg…)
- "If you really want to know how far AI is from human knowledge, then consider thi…" (ytc_Ugz160cl2…)
- "i say we start an art trend where we draw all these ai stans as pregnant lets se…" (ytc_UgzaYQ9fM…)
- "Nah if she was an AI she would know exactly how to deceive a human being by feig…" (ytr_UgzSUHxlz…)
- "@SusCalvin I'm not talking about books. Books are a small part of the industry. …" (ytr_UgwSnuTQk…)
- "Well, the creator of the AI not being sued for the actions the AI would be direc…" (ytr_UgzC1vkMq…)
- "Ai and ai companies are full of crap and I can’t wait for the bubble to burst……" (ytc_UgyMoNJRS…)
- "She's going to be in the future history books for AI bots. \"And in 2024 Sophia t…" (ytc_UgxLQaW8n…)
Comment
1. Tesla can do the whole google thing in its app, showing photos of tricky situations the car has been in and asking a simple binary question every day. It can even be incentivized with free supercharge for every n questions answered. Bigger data = Better AI.
2. A "human driver" usually slows down the car knowingly or unknowingly when he is confused. AI must be able to do the same no matter the level of uncertainty
Source: youtube
Category: AI Harm Incident
Timestamp: 2022-09-03T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxX-Ayn7yUJznfc0t54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-PlPlFxZMlrKl7L14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzvDnbzXfa5FBoo7Zl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwgOExhv5yZa_XdtpB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyoednUGu41_yo24md4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwANF7DrFOhKxikA-R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8e0l4Nu2l9bYutH94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxG7tG4EnJHJkTtht4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzErz4Be_uJzuUAsAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzncrNhH5J6ZBBj3N94AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
```
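The raw response above is a JSON array with one coding object per comment, keyed by the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and indexed by comment ID, assuming the field names in the sample (the shortened IDs in the inline data are illustrative, not real comment IDs):

```python
import json

# A shortened stand-in for the raw LLM response shown above
# (same fields; IDs abbreviated for illustration only).
raw_response = """
[
  {"id": "ytc_UgxX", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwx", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"}
]
"""

# Every coding object is expected to carry exactly these keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the JSON array and build a lookup table keyed by comment ID,
    skipping entries that are missing any required field."""
    codings = {}
    for entry in json.loads(raw):
        if REQUIRED_KEYS <= entry.keys():
            codings[entry["id"]] = entry
    return codings

codings = index_codings(raw_response)
print(codings["ytc_Ugwx"]["emotion"])  # approval
```

Indexing by ID is what makes the "look up by comment ID" view possible: one linear pass over the response, then constant-time retrieval of any comment's coding.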