Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This stems from complete ignorance for what tools exist. Midjourney is an amazin…" — ytr_UgwTzyjNN…
- "A few questions on the ground robot. Will it dig itself out of a collapsed build…" — ytc_Ugyh1lwmn…
- "They're right. We don't want to build AI weapons that are made to keep functioni…" — rdc_cti3u4r
- "Yeah 'Mr spent so long reading rather than living my neck is deformed' you said …" — ytc_Ugw4Yplxd…
- "So google think it is possible to create a sentient AI but have policy against i…" — ytc_UgzvzZrH3…
- "Hi me again wow this feels like a tradition now um still theatre kid now got tal…" — ytc_UgwBr90X7…
- "AI helps me create wonderful things. I don't understand why it produces so much …" — ytc_Ugzb46b4R…
- "Is the Lawyer that doesn't want this to happen with UBI cause they want control …" — ytc_Ugw191dZq…
Comment
> In summary, the risk is real but it's likely not so immediate that we cease AI development and short term gains for some unknown future date where it will harm us. But likely the harms will be gradual and impact folks at small scales as bad human actors use the tech to do harm before we get to the point where an AI super villain is created

Source: youtube · Topic: AI Governance · Posted: 2024-11-15T01:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwRvWP_k7v_jN9-Te14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyksdh6rn-4hBjfu214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxlTd1d2AkohR8lVSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxF1_HmuOODIl8KiOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTMs1seu-Hm2wg1tB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzck-R6lKxbvEb8M5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyyLzF6cJe301DdxjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyL07Rq-EVfO1ActR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz6Llf_yDF9Gc34V9B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzaIf0jFeodxvBJt2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
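Downstream tooling can recover a per-comment coding result (like the table above) from the raw response. Below is a minimal sketch, assuming the response is always a flat JSON array of records with these four dimensions; `find_coding` is a hypothetical helper, and the two embedded records are copied from the response above for illustration.

```python
import json

# Shape taken from the inspector output: a JSON array of records, each
# with an "id" plus the four coding dimensions.
RAW_RESPONSE = """[
{"id":"ytc_UgxlTd1d2AkohR8lVSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTMs1seu-Hm2wg1tB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def find_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding for one comment ID."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Keep only the expected coding dimensions, dropping extras.
            return {k: record[k] for k in DIMENSIONS if k in record}
    return None  # ID not present in this response

coding = find_coding(RAW_RESPONSE, "ytc_UgxlTd1d2AkohR8lVSZ4AaABAg")
print(coding)
```

The lookup-by-ID behavior mirrors the inspector's "Look up by comment ID" feature: an unknown ID simply yields `None` rather than raising.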