Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (comment text and IDs truncated):

- "I am not trying to say that they are not dangerous but it is not so simple. If y…" (`rdc_ohvodpm`)
- "There is no expectation of privacy in or with an automobile. Tracking a license …" (`ytc_Ugze9QhjP…`)
- "Ai literally did nothing wrong. It's a sweet little robot that just wants to dra…" (`ytc_UgyoaJ7X5…`)
- "@CarlosBenjaminthis is the tip of the iceberg you find on free/paid deepfake web…" (`ytr_UgxSHXs9w…`)
- "I get tired of getting lumped into the "slop" category when i spend days writing…" (`ytc_UgxTQo0x_…`)
- "@dorkception2012 there are lots of ways for it to kill us off, you've heard of …" (`ytr_Ugy0O-CQL…`)
- "When the first bot autonomously kills a human being, that's when hard action fro…" (`ytc_UgxN2OeIn…`)
- "Figured id find this kind of video from this creator. Its not a quirky debate, a…" (`ytc_UgxhiB42U…`)
Comment (verbatim, as submitted):

> youre right, there was this one website that was a huge social experiment game designed to have people guess if they were speaking with an AI or another participant. some times it was hard to tell, some times human participants wanted to intentionally trick others into believing they were a robot.
> When it ended, the website linked to a report of data gathered from the game.
> No major contry had a 100% success rate at distinguishing AI from Humans.
> i think India had the lowest win % but i cant remember who had the highest, might have been france iirc.
youtube · AI Moral Status · 2023-11-27T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzjinUEGsjwCexVMTN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzfSMZhC4YCwbEKPYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxEJ41N6bGU1hX77wZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxM7QdYkyTp-HPyzTF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugzq-1BFhhb4fLm8Hpd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzilT3y3SXFP3T--c94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwoZCfF_7r9uD74bTF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwhOU0EzTBLkaFYXIt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugwc4addqL8_KptcZhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzdOhB8w3rCcuMjv_V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]
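The workflow above (parse the raw LLM response, then look up a coding by comment ID) can be sketched as follows. This is a minimal sketch, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the record above, while `raw` is shortened to two of the ten entries and the `lookup` helper is hypothetical.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, with the same
# fields as the record above (shortened here to two entries for brevity).
raw = '''[{"id": "ytc_UgzjinUEGsjwCexVMTN4AaABAg",
           "responsibility": "none", "reasoning": "unclear",
           "policy": "unclear", "emotion": "indifference"},
          {"id": "ytc_UgzilT3y3SXFP3T--c94AaABAg",
           "responsibility": "ai_itself", "reasoning": "deontological",
           "policy": "none", "emotion": "approval"}]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coding for a comment ID, or a 'not coded' marker."""
    return codings.get(comment_id, {"id": comment_id, "error": "not coded"})

print(lookup("ytc_UgzilT3y3SXFP3T--c94AaABAg")["reasoning"])  # prints: deontological
```

Indexing by ID up front is what makes the "look up by comment ID" view cheap even when a batch response codes many comments at once.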