Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Learning is still very relevant to train curiosity and to obtain the skill of coming up with the right questions - some of which still cannot be answered correctly by AI. For all those questions that CAN be answered by AI, it still helps to know what you should ask at what time. AI is at the moment very inept at doing it autonomously; and it also sucks at weighing/scoring the possible alternatives by simulating consequences, you know, the actual intelligence/reasoning. For the most part AI right now is a retrieval tool, a better search engine (when it does not make up stuff). But information retrieval alone is not the point of intellectual or economically sensible work. The point is, most of the time, constrained optimization under uncertainty. Paradoxically, the AI can't pick the right goals and can't pick the right/efficient optimization approaches and can't decide which uncertainties matter either. Even if the goals are predefined, it fails - it will optimize toward a different goal than the researcher/trainer thinks it has been given.
youtube
2025-07-31T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxRomQ1Od-eJ74-m3F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1nryAZkTF-VVk3fx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQ0Y506gLCTDUa7ZJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyf35M6NWva71eGAkJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyKiTb2f5zOTjZKkjl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxc5m_JqT9RBclnnIZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxk-f2uQjTewDQSHrd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgymyXRwNt3fgczYzbx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzczANC_YKzSPYBDBR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwyyQ8wudClYzKa8QJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
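The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of looking up a comment's codes by ID from such a response - the two example rows are copied from the array above, and the `codes_by_id` helper is illustrative, not part of the tool:

```python
import json

# Raw model output: a JSON array of per-comment coding objects,
# shaped like the "Raw LLM Response" shown above (two rows excerpted).
raw = '''
[
 {"id":"ytc_UgzQ0Y506gLCTDUa7ZJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwyyQ8wudClYzKa8QJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
'''

def codes_by_id(raw_response: str) -> dict:
    """Index the parsed array by comment ID for direct lookup."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = codes_by_id(raw)
print(codes["ytc_UgzQ0Y506gLCTDUa7ZJ4AaABAg"]["reasoning"])  # → mixed
```

Indexing by `id` is what makes a per-comment inspection view like this one cheap: one pass over the batch response, then constant-time lookup for whichever comment the user opens.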