Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgwfvEQs-…`: "People used to think AI was just a copy-paste machine, but it has grown into wha…"
- `rdc_da40c57`: "LNP did such a good job of regurgitating the rhetoric that they managed to convi…"
- `ytr_UgyY5Yb1P…`: "@Mihaiyi Do you think machines need to become pretend humans to wipe us out? Som…"
- `ytc_UgxAKAmFZ…`: "I was about to attempt to defend this guy by explaining that there's a lot of wo…"
- `ytc_UgxJs9gUP…`: "When business leaders think about saving money on labour they always first think…"
- `ytc_UgyTkCAM5…`: "On the deep fakes I literally do not care. We've had copy paste tech for a long …"
- `ytr_UgxznrvrJ…`: "@Aruna_Shadows It's not true i am heavy heavy heavy chatgpt user and no matter w…"
- `rdc_lp6svvl`: "That is what they're trying to do. Re-opening 3 Mile Island, also I think one in…"
Comment
Yudkowsky is simply a doomsayer, not a reliable expert on this topic. He totally misrepresents what's actually going on with these systems to make them sound like a threat (including these sensational stories about the LLM "breaking into" systems).
LLMs will never be AGI. Anyone who knows how they work knows this is true for a variety of reasons. There are alternative architectures, but they are no closer to AGI than we were 20 years ago.
I highly recommend you look into interviewing Ed Zitron for a more realistic perspective on the AI landscape.
youtube · AI Governance · 2025-10-15T11:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwYuhFUceLUp0DLTQl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzHgKJD8ED47ov2Nld4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyeX3fsEXHIblcijz54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyYUBC6bjY3u0o51ox4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgweJJ5Wqx9AE1Cc4W94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzaWCl__DKhlDGg9Qx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzqjyJAnCWCF6AbzlZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxYL4689kpmBK9Nlyp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx5R6RPeKobIC3_9et4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8M5dKr2wOu7Bq50N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
```
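A batch response in this shape can be parsed into a lookup table keyed by comment ID, with each dimension checked against the coding scheme. The sketch below is a minimal example, assuming the record format above; the allowed-value sets are inferred from values observed in these responses, not from an official codebook, and the IDs in the usage example (`ytc_a`, `ytc_b`) are hypothetical.

```python
import json

# Allowed values per dimension (assumption: inferred from observed responses,
# not an authoritative coding manual).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "mixed", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding}, validating values."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec[dim]
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {value!r}")
        codings[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return codings

# Usage with two hypothetical records in the same shape as the response above:
raw = '''[
  {"id": "ytc_a", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_b", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''
codings = parse_codings(raw)
print(codings["ytc_b"]["policy"])  # regulate
```

Validating at parse time catches the common failure mode of LLM coding pipelines: a response that is syntactically valid JSON but uses a label outside the scheme.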