Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "We need a nuclear like non-proliferation treaty globally to make AI safe. Unfort…" (ytc_Ugwbtz84v…)
- "Smart Google people can't figure out why artificial intelligence is spitting out…" (ytc_UgwVOMYTX…)
- "In my opinion, Shad is a rather mediocre artist who tries to get away with his v…" (ytc_UgxNe5s1M…)
- "Where is this happening? Who is stupid enough to chat with a chatbot and fall in…" (ytc_Ugxuh4HyU…)
- "The way we're going, the world at large is being groomed to be a human collectiv…" (ytc_UgznOKVYb…)
- "This isn't bias at all. A presumably atheist white dude trying to do bias traini…" (ytc_UgzkQwkYd…)
- "This CEO playing down his AI's ability to deceive and scheme is a huge red flag.…" (ytc_Ugw__Fqni…)
- "That's literally the opposite of lightening the load. AI allows us to make more …" (ytr_UgwHKFf6E…)
Comment

> There was a leak from Google last year, that one of the AI libraries had self-replicated itself to a server farm, that Google didn't even know about for over a year, and the only way to stop it from constantly grabbing data from the web was to disconnect the internet and cut the power, as it had already changed the Linux kernel enough to block console access. And What Did Google Do, They Opened AI To The Full Public...

| Platform | Video | Posted | Likes |
|---|---|---|---|
| youtube | AI Moral Status | 2026-02-28T19:4… | ♥ 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwhq0kEidwBSXV_gXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw5USXloCU-0wltaE14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_sMYkpQVpPeroVJt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzbcWSSdnA6p07MZ1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgytO5jNoAUoz6gK5FB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxSJB2aWzlO0lthMn54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfdGAEL0jjL4dFwCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwRLipJIIPbRxTjVLl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgylMwnaAES_OYOYWgR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyjqLUuVFMyRgBIEQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
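A response like the one above can be turned into a per-comment lookup with a small parser. The sketch below is a minimal example, not the tool's actual code; the allowed category values are inferred only from the responses shown on this page, and the real coding scheme may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the responses visible on this
# page (assumption: the real coding scheme may define more categories).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting rows
    with unknown dimensions or out-of-scheme values."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        for dim, value in row.items():
            if dim not in ALLOWED or value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = row
    return coded

# Usage: parse one coded row and look it up by comment ID.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"outrage"}]')
coded = parse_response(raw)
print(coded["ytc_example"]["policy"])  # → regulate
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the scheme, so bad rows fail loudly instead of silently skewing the counts.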