Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This is big tech propaganda to scare you into believing that corporate siloed AI…" (ytc_UgzfyxpWE…)
- "We just had a shooting in my town. Guy decapitated someone, stabbed someone else…" (rdc_f8t2woz)
- "Ai is definitely not going to do what I do that's for sure. Not even a robot.…" (ytc_Ugy6bbq7S…)
- "AI will become beneficial and safe right after the near death of Capitalism. Tha…" (ytc_UgzmmBScd…)
- "If you think about it, driverless AI is already safer than most humans. It is th…" (ytc_UgxfqLCZ0…)
- "I genuinely don’t see how digital art is “soulless” or “lazy”. When one makes a …" (ytc_Ugy4xpuPi…)
- "I will start worry about this, when chat AI's stop being so effing dumb. Current…" (ytc_UgxdPX1JL…)
- "No such thing as an AI artist, just lazy cloat chasers using stolen art to try a…" (ytc_UgxYcfsus…)
Comment
Lex: we'll just keep developing and testing AI until we see some bad characteristics and pause to see if we want to let the genie out the bottle...
Roman: if it's super intelligent, it'll probably keep deceiving us before realising that we've already let the genie out the bottle.
Lex: .....erm, we'll test harder...
Source: youtube · Posted: 2025-08-06T18:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
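The dimension values in the table can be checked programmatically against the value sets that appear in the codings on this page. A minimal sketch in Python; the `SCHEMA` sets are inferred from the samples shown here, not from an official codebook, and `validate` is an illustrative helper name:

```python
# Allowed values per dimension, inferred from the codings on this page;
# the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(row: dict) -> list[str]:
    """Return the names of dimensions whose value is missing or not in the inferred schema."""
    return [dim for dim, allowed in SCHEMA.items() if row.get(dim) not in allowed]

# The coded row from the table above.
row = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "liability", "emotion": "fear"}
print(validate(row))  # [] : every dimension holds a known value
```

A row with an unrecognized or missing value would come back with the offending dimension names, which makes malformed LLM output easy to flag before it enters the dataset.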
Raw LLM Response
```json
[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQQD1DH02Ch4ywd5F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvSfnbJpdRu6ptCHR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1ZTEOhLM3wtuZjAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxkGC0CE_7Lt4DWmxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQ9Db389WW2yzzBCF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymQt_83X-2JdfliQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIxh9EARldj4G_Aep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
```
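The "look up by comment ID" step above amounts to parsing the raw response array and indexing the codings by `id`. A minimal sketch in Python, using two rows copied from the response above (variable names are illustrative):

```python
import json

# A subset of the raw LLM response shown above.
raw_response = '''[
{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index codings by comment ID so a coded comment can be fetched in O(1).
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxGDwPcMoRiQbeUhAd4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # fear
```

This reproduces the dimension table above (responsibility: developer, policy: liability, emotion: fear) directly from the raw model output for that comment ID.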