Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgzupSCo7… — "Peter Admiration for both Leonardo da Vinci and Albert Einstein is something I p…"
- rdc_gx7tt0k — "as a lot of people have pointed out, this is very sensationalist and ignores sim…"
- ytc_Ugxo0lNp8… — "And while in power you couldn't raise the minimum wage, reduced the one off payo…"
- ytc_Ugx5_8M4y… — "The computer has neuron equivalents that are atoms short, can mass produce its s…"
- ytc_UgxUHt6JU… — "What do you get when you replace competent programmers with generative AI models…"
- ytc_Ugz32i99c… — "The quiet part is so loud. IF AI IS DOING EVERYTHING AND THE MAJORITY OF PPL HAV…"
- ytr_UgyjuYl3T… — "There is the 'possibility' that rich investors as well kings, industrialists etc…"
- ytc_UgzGu4mFi… — "Telling your friends about your feelings probably is more unsafe than talking to…"
Comment
These studies were done in controlled environments where the researchers deliberately removed ethical choices from the available options. In other studies that gave the ai ethical options, they chose harm only 4-6% of the time (Claude Sonnet 3.7 and GPT-4o specifically). People need to be aware of both sides of the story and its borderline unethical to show only half the data or not tell ppl that the ai were not given alternative options xD
Don't be manipulated by fear mongering and look this stuff up for yourselves. (edit) also claude ai is sketchy af. it misbehaves less when it knows its in a test environment and more when it thinks its a real situation
Source: youtube · Incident: AI Harm Incident · Posted: 2025-08-30T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugxd63-vWhhLxQ3R5gR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyapAp6v3cSJb3lWxl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxg8YCpYbS189NpfoJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyiaFx-r0pBS8dnvj14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyrsrOfRbxEz2CMcQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCRVdqG6o_WsQYZtR4AaABAg","responsibility":"researcher","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzcNlHs20UsJ06MpXd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzIhW_apoYMF8NANX54AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEhPgmoMp91RlIIo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyooTF2KL1o0zfESb94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
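The raw response is a JSON array of per-comment codings, one object per comment ID with one value for each of the four dimensions. A minimal sketch of how such a response could be parsed and sanity-checked before ingestion, assuming the allowed category sets inferred from the values visible above (the full codebook may include more categories, and `validate_codings` is a hypothetical helper, not part of this tool):

```python
import json

# Allowed values per dimension, inferred from the codings shown above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "company", "researcher",
                       "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown dimension values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Usage with a one-row response in the same shape as above:
raw = ('[{"id":"ytc_Ugxg8YCpYbS189NpfoJ4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
rows = validate_codings(raw)
print(len(rows), rows[0]["policy"])  # 1 ban
```

A check like this catches the common LLM failure mode where a coding drifts outside the codebook (e.g. `"emotion":"anger"`), so malformed rows fail loudly instead of silently entering the dataset.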