Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Comment
I like the idea of tracking failures but his examples are not persuasive. If AI has no direct control over the physical world, it will generally have a low danger level. In other words if it is an information advisor, it could be kept safe. However as AI takes over sensitive systems, it will have a higher risk.
https://arxiv.org/pdf/1610.07997
Source: youtube
Timestamp: 2024-06-17T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyY_8W2NHA3-iLHw3R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfuLTgVUoQaHVETpd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgycSLpuMuZzZmCN7p94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugypybs6otRb8oPRQV54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzZzYpW2Th7hoRYqIh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw2LjoLgsnb2Lzizz14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugw-9tnkRZiyZm8hsSF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwgz--wnqVXh4vrpB14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx2YKBxnZGTBPpuXhl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwx2Zz34hKV4v-oNXt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
```
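Because the model codes comments in batches, a single comment's coding has to be pulled out of the JSON array by its `id`. A minimal sketch of that lookup is below; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the `lookup` helper and the two-entry sample payload are illustrative, not part of the tool.

```python
import json

# Illustrative two-entry excerpt of a batch response, in the same shape as
# the raw LLM response shown above (a JSON array of per-comment codings).
raw = '''[
  {"id": "ytc_UgyY_8W2NHA3-iLHw3R4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfuLTgVUoQaHVETpd4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return entry
    return None

coding = lookup(raw, "ytc_UgyY_8W2NHA3-iLHw3R4AaABAg")
print(coding["emotion"])  # indifference
```

A linear scan is enough at batch sizes of ten; for bulk lookups across many batches, building a `{id: entry}` dict once would avoid rescanning the array.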