Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I do not like how that automatic garage moves the cars. No matter what car it is… (ytc_UgzjCnZQi…)
- I like how the spreader of the term 'false news' now spreads false news and AI I… (rdc_lixsenn)
- The problem is that as there are less jobs, less meaning in contributing to some… (ytr_UgyGjfwdq…)
- Reverend Mother Gaius Helen Mohiam: "Once men turned their thinking over to mach… (ytc_Ugxck-W8z…)
- Since when has fairness ever been distributed across society, what a load of rub… (ytc_Ugzw1bMDu…)
- It amazes me that society didn't learn it's lesson the first time automation cam… (ytc_Ugz7B-eAR…)
- I don't mean this cynically, but can't one of the AI agents do this? Create the… (rdc_jgh3bb0)
- Feels like a bit complicated to remember and spell out while under heavy fire of… (rdc_mbgbemh)
Comment
AGI (human-like AI) is officially a buzzword — there is no agreed upon single definition of it. Even what we call AI today, based on LLMs, was not projected to be what it turned into today. Presence of massive computing power through large cloud providers made this possible. It will hit a ceiling soon in 3-5 years without producing human-like AI. It will still be scary though. Another level of breakthrough is needed to unlock computational power to reach human-like AI. Quantum computing has potential to unlock that. This will be in 10+ years. In the meantime, there is not much to be optimistic about.
youtube
2025-06-07T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGr3gJFmjAn3cM5fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwgMCVUt2G7xe2l8A54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzcLY1zA-7BPxyhqp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLk3VDUn4wE1UhE2h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzdx7L0GyIjh0SrG_14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMsSzdQr2BPE948ll4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxbiQVTSNlsvisN2SZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGy2yb7MN6WPrnAzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxio3XMveQozMOs9rR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"skepticism"},
  {"id":"ytc_Ugzg0_odCbW5QVGo56R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
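A raw response like the one above can be checked mechanically before the codes are accepted. The sketch below is a minimal validator, assuming the allowed values are exactly those seen in the samples on this page (the real codebook may include values not shown here); `validate_response` and `ALLOWED` are illustrative names, not part of the tool.

```python
import json

# Allowed codes per dimension, inferred from the sample output above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "fear", "resignation", "approval", "skepticism"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any row with an unknown code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}"
                )
    return rows
```

A validator like this catches the common failure modes of LLM coding runs (malformed JSON, invented labels) at ingest time rather than at analysis time.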