Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI art should be used as the AI telling you what to do, not you telling the AI w…
ytc_Ugy6SI3hu…
I am wondering if you went off grid living on a random piece of land and discinn…
ytc_UgzyI2jRE…
AI can be many things, but it lacks motivation, I wouldn't worry about pure AI t…
ytc_Ugx33hW5G…
Yeah what, it doesn't compute for me. He'd at the youngest started university at…
rdc_dv0llh3
Yeah, I was thinking the same. AI writes the script, then a human dots the i’s a…
rdc_jwv654s
It’s a REAL ID for gods sake! What you have to go through to prove who you are t…
ytc_UgyCr7xV-…
The AI Lawyer just standing there, smiling, taunting the Judge. She was NOT havi…
ytc_UgzXJtOIh…
Where I don't follow is if Facebook is pushing for AI agents, won't that be unat…
rdc_m5lz0tz
Comment
But aren't human thought processes also about predicting the best possible answer? I see your points, but the actual problem with LLMs is the missing reinforcement-learning cycle, both in general and domain-specific. All the other patterns you introduced would apply to a human in the same way for problem solving.
youtube
AI Jobs
2026-02-26T08:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw64E5T6ozVpi7J8YR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwt7XdP1CAkDUiHQld4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-CmmFfYk_s_x4U3Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzYZiO2ms1omdYdSUN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzv5PK9Jwz-XZ60wAx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyHqCNT1TtQZR17yn94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx9NVHmMBi48JPr9Xx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnHLyMh48vkLjdQ9Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyP3LTupCFGiXEuLl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzKF2hpjjFcxka56pF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
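The lookup-by-comment-ID view above can be reproduced directly from a raw response like this one. A minimal Python sketch, assuming the model output is a JSON array of per-comment code objects in the shape shown (the two rows below are excerpted from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (excerpt from above).
raw = """
[
  {"id": "ytc_UgyHqCNT1TtQZR17yn94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9NVHmMBi48JPr9Xx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

# Parse the array and index it by comment ID so any coded comment
# can be inspected with a single dictionary lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

code = codes_by_id["ytc_UgyHqCNT1TtQZR17yn94AaABAg"]
print(code["policy"])   # regulate
print(code["emotion"])  # fear
```

In a real pipeline the same index would be built once per batch response, so each "Look up by comment ID" query stays O(1) regardless of how many comments the model coded in one call.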