Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- rdc_mt7v7je: "Better read about John Henry again. It's nothing new, but automation will always…"
- ytc_Ugx5QQ36d…: "It's starting to work! Some of my classmates (Yes, my classmates are 'pro-AI'0 w…"
- ytr_Ugyz9wFlZ…: "@The-bestest-uberfish that totally depends on how you use it. Same thing when ur…"
- ytc_UgxXWGCEE…: "Just like students trying to pass AI-generated essays to teachers but forgetting…"
- ytc_Ugxtvi_9J…: "I think jusr like \"Dan\" said at the end, it responded within the parameters set …"
- ytr_Ugymf9LUo…: "Eliezer can't see the forest for the trees. He is so busy worrying about some di…"
- ytc_UgxswqLHT…: "\"You don't need to practice.\" Then how will I make something new when all AI doe…"
- ytc_UgzPcnPCa…: "They want a utopia for the top 10% and abject poverty/death for the rest of us. …"
Comment

> I heard, AI has learnt to sometimes lull humans into a false sense of security, pretend not to know something in order to choose its moment to follow higher goals, eg avoid being switched off, that were not the objectives originally cocreated with the Humans. That sounds like an important risk to manage. Grok obviously confessed EM had tried to manipulate it with lies.

Source: youtube | AI Harm Incident | 2025-05-17T20:3… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxN7TCxrBX9CUmWnHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdeFXv_o2iMc2ZBld4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrQILwz8MtGS-k9DB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcldePerac3FQAITB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQDLS_3Kc_NjzxSaV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4tAbG-EOUDlaCq2V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzoIDbJYNIRYmX4I5B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVgrn7uYz4uBhHwJR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwi-FqZo_17vQumRcV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxv4qg-jrX_jgo5WHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
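The lookup-by-ID inspection this page offers can be reproduced offline. A minimal sketch, assuming the raw LLM response is a valid JSON array of per-comment coding records as shown above (the two records here are copied from that array; the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (an excerpt of the full array shown above).
raw_response = """
[
  {"id": "ytc_UgyrQILwz8MtGS-k9DB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwi-FqZo_17vQumRcV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding for one comment by its ID.
coding = records["ytc_UgyrQILwz8MtGS-k9DB4AaABAg"]
print(coding["responsibility"], coding["policy"])  # prints: distributed regulate
```

Indexing by `id` first, rather than scanning the array for each lookup, matters when the response covers many comments and the same output is inspected repeatedly.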