Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Advanced AI is the next atomic energy/bomb. It has potential for a lot of good a…" (ytc_Ugz_SIll0…)
- "Ah yes, a global response for deep fake p***. Out of all of the things that shou…" (ytc_UgwqeUmIZ…)
- "After watching one and a half seasons of the witcher I'm pretty sure no ai could…" (rdc_jirehg1)
- "Considering the amount of bad drivers, there is truth to what WAYMO stated by hu…" (ytc_Ugyo-mo6w…)
- "Hey Sophia! Great to see another Sophia here. It's always nice to meet someone w…" (ytr_Ugw3KgZW4…)
- "Compassionate statements or define try good. (I'd love to hear caring words SMAL…" (ytr_Ugzt9D9V-…)
- "My mom showed my grandma how to generate images with ai and my grandma said it s…" (ytr_UgySZCcwP…)
- "Well, overall, this video it’s not total clickbait, but it’s not 100% accurate e…" (ytc_Ugxu18P9Q…)
Comment
I already knew AI could be a dangerous thing against humanity as a whole. Like yeah it’s cool to have that to help us solve problems or make life easier, but I believe we should have a limit on how smart we make them. Basically not too smart to where they start having a mind of their own, but as we can see when it came to “Sydney” it’s already started.
That and having AI make choices than humans is good and bad. Good cus it could help prevent problems like what u showed us, but bad cus they could see the human race as an issue on its own and the try to “fix the problem” by getting rid of us.
So I’m like 50/50 when it comes to AI. It can be helpful, but at the same time it could make a logical solution and truly see us and what we do as a problem
youtube · AI Governance · 2023-07-07T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
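
For orientation, here is a minimal sketch of how one coded record could be represented, assuming a Python workflow; the `CodingResult` class and its field names are illustrative, not the tool's actual schema, and the example values are taken from one entry in the raw response below.

```python
from dataclasses import dataclass

# Illustrative sketch only: the fields mirror the coding dimensions shown
# in the table above; this is not the tool's actual schema.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # e.g. "developer", "government", "company", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "regulate", "ban", "liability", "industry_self", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "mixed", "indifference"
    coded_at: str        # ISO 8601 timestamp of when the coding was produced

# Values copied from one entry in the raw LLM response below.
example = CodingResult(
    comment_id="ytc_UgyEH-u-qtlab019hkB4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```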
Raw LLM Response
```json
[
{"id":"ytc_UgwzsvT0R8QaLUyNUIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxtb04KfZMnmnwKn4x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx5zeaDZkjk9MLbG_54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOy-FBaa-ajhMXoMZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugykacu35_BkzEzD8hN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyEH-u-qtlab019hkB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxl48isDCChCc0PY594AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzKB-h2aVWk7JxR03x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzR0GPh_1T1t2ExF0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzX0cewoR8nHZWY4Td4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
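
To tie the raw response back to the "Look up by comment ID" view, here is a minimal sketch of parsing one batch response and indexing it by comment ID, again assuming a Python workflow; the function and variable names are illustrative, not part of the tool.

```python
import json

# Trimmed copy of the raw LLM response above: a JSON array with one object per coded comment.
raw_response = """[
  {"id": "ytc_UgwzsvT0R8QaLUyNUIx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxtb04KfZMnmnwKn4x4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse one batch of codings and index the records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgwzsvT0R8QaLUyNUIx4AaABAg"]["policy"])  # -> liability
```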