Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I thought UK legislation required explicit consent or substantial public interes…" (rdc_l65ft4s)
- "AI 'artists' are just artists but they have no friends, sit alone in a basement …" (ytc_UgyssoQTe…)
- "I mean, the ai artist did say the art was ai (EVERYONE WHO RE-DID THE ART IN THE…" (ytc_UgwfCoTdN…)
- "I feel like I'm watching a conspiracy reel. .... if it's doing other more compli…" (ytc_Ugy-YPCOC…)
- "Tbf I get you but I’m sure the mum didn’t even know to look out for this type of…" (ytr_Ugyf5gvk_…)
- "It is my understanding that modern society has never achieved herd immunity with…" (rdc_g9tjv9j)
- "A simple automation setup would be far faster and cheaper, why build a robot to …" (ytc_UgwfgqK8y…)
- "True agentic and autonomous AGI, ushered in by the human intelligence Super Evo…" (ytc_UgwCfpQtV…)
Comment

> This is incorrect. LLMs empirically contain abstract models of the world. See for example "OthelloGPT" and "Language Models Represent Space and Time" for old examples. There are many more since.
>
> It is very, very tempting to dismiss AI as a party trick, and it is difficult if not impossible to actually hear the world's leading experts when they try to explain what is actually going on with these systems, which they themselves do not fully understand. Simple explanations are satisfying and comfortable, and phrases like "true intelligence" and "true understanding" are cheap dismissals, but if you ever try to define those terms and then actually test humans and AI, you will find that in almost all ways where humans meet the definition, LLMs do too.
>
> This is weird. This is potentially extremely bad. Dismissing it will not make it go away. We are going to actually have to work together. Very hard to purposefully make it go away.

youtube · AI Moral Status · 2025-10-30T21:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
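Each coded dimension takes a value from a small controlled vocabulary. As a rough sketch, a coding result could be checked against those vocabularies before display; note that the category sets below are inferred from the values visible on this page, not from an actual codebook, so they may be incomplete:

```python
# Validate one coding result against the dimension vocabularies.
# NOTE: these category sets are inferred from values visible on this
# page; the real codebook may define more (or different) labels.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "distributed"},
    "reasoning": {"unclear", "virtue", "consequentialist"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append("%s: unexpected value %r" % (dim, value))
    return problems

# The coding shown in the table above passes:
print(validate({"responsibility": "none", "reasoning": "unclear",
                "policy": "none", "emotion": "approval"}))  # -> []
```

A record with a value outside the known sets (or a missing dimension) would come back with one problem string per offending dimension.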
Raw LLM Response
[
{"id":"ytr_UgxKCgf8JuOU9WYeCuB4AaABAg.AOv6yCr_DsmAOv6yCr_DsmAOvCDNNIAt8","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxKCgf8JuOU9WYeCuB4AaABAg.AOv6yCr_DsmAOvft0o88L1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxKCgf8JuOU9WYeCuB4AaABAg.AOv6yCr_DsmAOvvqRKIb88","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxKCgf8JuOU9WYeCuB4AaABAg.AOv6yCr_DsmAOw4hEMxKpk","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgynyV7z5IzzOWY-2Vl4AaABAg.AOv6jR5_wzzAOvCOHBfGVN","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgynyV7z5IzzOWY-2Vl4AaABAg.AOv6jR5_wzzAOvEAo82QOs","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgynyV7z5IzzOWY-2Vl4AaABAg.AOv6jR5_wzzAOvF4Yk3y-h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxHRCQZlgXI7lMyl8d4AaABAg.AOv6YwKNBI7AOv8niKyUkf","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxHRCQZlgXI7lMyl8d4AaABAg.AOv6YwKNBI7AOvTaQF6c3K","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgwaetNvyKGmHPkYVDN4AaABAg.AOv6TOpprEAAOvPGTkeZrD","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
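Assuming each raw model response is a JSON array of per-comment codings like the one above, the "look up by comment ID" feature can be sketched by indexing the parsed array on the `id` field. The IDs and helper names here are illustrative, not the dashboard's actual implementation:

```python
import json

# A miniature raw LLM response: a JSON array of per-comment codings.
# The IDs are made up for illustration; real IDs look like the
# "ytr_…" / "rdc_…" strings shown above.
raw_response = """
[
  {"id": "ytr_abc", "responsibility": "developer", "reasoning": "virtue",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_xyz", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]
"""

# Index codings by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in json.loads(raw_response)}

coding = by_id["rdc_xyz"]
print(coding["emotion"])  # -> approval
```

Looking up an unknown ID would raise `KeyError`, so a real lookup path would likely use `by_id.get(comment_id)` and report "not found" on `None`.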