Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It says sorry because its read billions of interactions that are smiliar to yours where "sorry" was used as a response, so the algorithm chose it.
An example is calling ChatGPT out for being wrong. It will almost always apologize because thats the typical reaction a human has to being corrected ("sorry youre right i made a mistake in figure 3", "sorry for the confusion i thought you said 'north'", and so on)
youtube · AI Moral Status · 2024-08-19T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
{"id":"ytc_UgzC4V3TsSx2YxEQLV14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxTmD_Xo0jNc0NTS_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyPUG6uqyKEKLT6rVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyxArKSsvY4gChyRsd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwy1yTspF5LclccTTZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwtD2MOdBuWem9pCRN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_dh4082fMUQ-kKAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8zF39Aqj5hL3ecnl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1ZPcEVFWz4Tcys1t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyK3z2FmOqQujFgiWh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
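
A minimal sketch of how a coded record can be looked up by comment ID, assuming the raw model output is a valid JSON array shaped like the response above (the `raw_response` literal and the variable names here are illustrative, not part of the actual pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codes along the four
# coded dimensions (responsibility, reasoning, policy, emotion).
# Two entries from the response above, shortened for illustration.
raw_response = """
[
  {"id": "ytc_UgzC4V3TsSx2YxEQLV14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyPUG6uqyKEKLT6rVx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

# Index the parsed codes by comment ID so any coded comment can be
# inspected with a single dictionary lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgyPUG6uqyKEKLT6rVx4AaABAg"]
print(record["emotion"])  # approval
```

Building the index once makes repeated lookups cheap, and a missing ID surfaces immediately as a `KeyError` rather than a silent miss.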