Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Ai takeover jobs then there will be no market nobody will be able to buy what …
ytc_UgzTaQ3Ky…
@thewannabecritic7490 the word "artist" has been Rap** over the decade to a poi…
ytr_UgyzyU4zn…
actually I like some parts of your comment, but have to argue with the other. At…
ytr_UgzEDs8mS…
Representative sounded like she was AI herself. Outsourcing livery to robots, ou…
ytc_UgzLyCnzq…
@untizio7125 I don't ask for consent when I'm tracing other people's work. Ai lo…
ytr_UgxCruXsA…
We are in an era in which they are making man dependent on these things,…
ytc_UgxRvpyTN…
This. Its an AI. Not a human. Your sentences are too complex. If you have to use…
rdc_n0lwdpm
It still costs a lot of money to buy a humanoid robot. Humans are still cheaper.…
ytc_UgzPWpOgL…
Comment
Sincerely have to disagree with some of these AI experiments, especially the Anthropic one where the AI tried to escape/blackmail and other horrible actions. You are correct in saying they didn't expressly tell the AI to take any of those actions in the prompts, and I forget the exact wording, but they did tell it to accomplish its goals by any means necessary, and to ignore morality while doing so. They told the AI to act immorally and were surprised when it started doing exactly that. I don't believe that's an entirely fair test of the AI's capabilities.
youtube
AI Moral Status
2025-12-16T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzfZAAN6FEmHCAL_zN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyqQiXE3iUAn-ib9-94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz3sxkwInrroDaBT0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyoxxraY-qjHPiRcBR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylMdJ6dK1vwD0T4iB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlV8_cSJg-A_O4VZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxQDg74duZmCE1M3KJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuWB4bEhMu3hf9YLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"disapproval"},
{"id":"ytc_UgyHcHPslbZPGH7x9X14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOuveFTNv-yJcZAkd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
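The raw response above is a plain JSON array of per-comment codings, so looking up a comment by ID amounts to parsing the array and keying each record on its `id` field. A minimal sketch of that lookup (the function name `index_codings` and the malformed-record handling are illustrative assumptions, not part of the tool; the two records are taken verbatim from the response above):

```python
import json

# Subset of the raw coder output shown above: a JSON array of
# per-comment codings, one object per comment ID.
raw_response = """
[
  {"id": "ytc_UgzfZAAN6FEmHCAL_zN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxuWB4bEhMu3hf9YLh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "disapproval"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the coder's JSON and key each record by comment ID."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        # Skip malformed records rather than failing the whole batch.
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = index_codings(raw_response)
print(codings["ytc_UgxuWB4bEhMu3hf9YLh4AaABAg"]["policy"])  # liability
```

Under this reading, the "Coding Result" table for a comment is just the dimension/value pairs of its record, and the truncated IDs in the sample list (e.g. `ytc_UgzLyCnzq…`) would need their full form for an exact-match lookup.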