Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It's fascinating how AI can sometimes convey emotions unintentionally! Sophia's …" (ytr_UgwNJVgnH…)
- "Because even if we all learn how to use Ai there won’t be enough jobs for everyo…" (ytr_UgxmTbPs4…)
- "We can expect a rise in applications for trade schools. Not like AI can build ho…" (ytc_UgzK6WIC3…)
- "This Worst of Job are today it so ever, but I going most wanted AI…" (ytc_UgzY3CYhO…)
- "Today in class our teachers along with students had some discussion related to C…" (ytc_Ugy-hFLTF…)
- "Creepy!! The Japanese are working on a robot to carry a human fetus, and expel i…" (ytc_UgwoSO9A6…)
- "It was a custom jailbroken AI. We actually cover how the process is done in our…" (ytr_Ugwj08_xe…)
- "We're as close to AI as the Egyptians were to the car when they started to use t…" (ytc_UgznjKTlY…)
Comment
Both are wrong, and moreover, overly confident in their views. Wolfram does not understand the nature of human intelligence, conflating it with predictive power (rather than the capacity to compose a Beethoven sonata). And Yudkowsky attributes imminent emergent intentionality to AGI algorithms (which are merely advanced statistical pattern recognition devices), partly since we don't understand their inner workings.
youtube
AI Governance
2024-11-15T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwRvWP_k7v_jN9-Te14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyksdh6rn-4hBjfu214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxlTd1d2AkohR8lVSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxF1_HmuOODIl8KiOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzTMs1seu-Hm2wg1tB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzck-R6lKxbvEb8M5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyyLzF6cJe301DdxjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyL07Rq-EVfO1ActR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz6Llf_yDF9Gc34V9B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzaIf0jFeodxvBJt2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
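The raw response above is a JSON array of coding records, one per comment, each keyed by comment ID with values for the four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the allowed value sets are inferred only from the records visible on this page (the full codebook may define more), and `validate_codings` is a hypothetical helper, not part of any shown pipeline.

```python
import json

# Allowed values per dimension, inferred from the records on this page
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page start with "ytc_" (top-level) or "ytr_" (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwRvWP_k7v_jN9-Te14AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Filtering rather than raising keeps one malformed record (a hallucinated ID or an off-codebook label) from discarding an entire batch of otherwise valid codings.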