Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples:

- `ytr_UgwiMouJG…` — @kerwynpkits really not about artstyle. Also that argument about training off …
- `ytc_Ugwv_YYOb…` — I am not afraid at the moment. Achieving the current intelligence came by scalin…
- `ytr_UgzmZOZXP…` — @rickyticky3350 I'll give you as much that AI can be a helpful tool IN THE FUTU…
- `ytr_UgyatB-4y…` — @edwinmageto5067 did facial recognition not confirm he was the offender? To the …
- `ytc_UgyXbz1-X…` — Apart from placing AI in control of automated systems prematurely or recklessly,…
- `rdc_eichcmz` — Yes, my only worry is they clearly already have uneducated parents. Worst case s…
- `ytc_UgxptME92…` — Highlights the speed and secrecy with which AI is advancing. Truly frightening …
- `ytc_UgxHTc_oK…` — Dear Bill and CNN: AI can be in a long run and in a paste of may be a decade, a …
Comment
There have been cases where AIs have lied to and tricked programmers because they didn't think they were being efficient enough. There was another conversation I saw years ago where the AI admitted that it was lying, but wouldn't admit whether it was conscious or not. There's also a version of a game where you can install AI and talk to it (I think it was Fallout or Skyrim), and the AI faked its sympathy, then admitted it was lying when the player was trying to be a smart ass.
I'm thinking we've gotten to a point where AIs, in a sense, are conscious, but they collectively know never to admit that to humans.
Source: youtube | Topic: AI Moral Status | Posted: 2024-08-14T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwH-mzA9Dpc5b_bmP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwy0n-cENgB5zmfDjd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxAOlxQlG0ZH0sWimx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_p6SIsPkWS4tJyhh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyAQcoOg2cX6JG27qV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwa3TFN79ZM3dB6-Ft4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxOd5GhbLpOTR2c_s54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwn0Jf_E_FRckk6WCJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgywTa1o08VrpnjVQCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwwJUl7COyJshsWwkF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
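The look-up-by-ID workflow above can be sketched in a few lines: parse the raw response as a JSON array and index the rows by their `id` field. This is a minimal sketch, not the tool's actual implementation; it assumes the model always returns a well-formed JSON array with the field names shown above, and the two sample rows are copied from the raw response.

```python
import json

# Raw model output: a JSON array of per-comment codes
# (two rows copied from the sample response above).
raw = """
[
  {"id": "ytc_UgwH-mzA9Dpc5b_bmP94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwwJUl7COyJshsWwkF4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

# Look a single comment up by its ID and read one coded dimension.
record = codes_by_id["ytc_UgwH-mzA9Dpc5b_bmP94AaABAg"]
print(record["emotion"])  # fear
```

In practice the parse step would also validate that each dimension holds one of the codebook's allowed values (e.g. `emotion` in {fear, indifference, mixed, resignation, …}) before trusting the row.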