Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "*We should not use AI as our physical mental or social need*. If we, results ma…" (ytc_Ugze8HSTP…)
- "In an ideal world, sure this is the case. But here in reality, people can't jus…" (rdc_kif5qaz)
- "I need a hug but I will not accept one... THE AI ALWAYS ASKS ABOUT MY FAMILY AND…" (ytc_UgwQufAEX…)
- "My onnly hope is he potential that AI could help us to eliminate capitalism ent…" (ytc_UgyzvEfwg…)
- "Imagine the AI GOT ALL the important CODE all of the world or she has access on …" (ytc_Ugylax8E2…)
- "“But le humans le do the same thing!!!” We don’t know exactly how humans doubt…" (rdc_mxg7jzh)
- "Pretty far into her book and I like her approach. I like how she doesn't present…" (ytc_Ugxk0afp2…)
- "One thing people and governments are forgetting… getting rid of people sounds gr…" (ytc_UgybY9yrg…)
Comment (youtube · AI Governance · 2023-07-17T15:0…)

> The problem with AI taking over the world is what's the point? If AI completely eradicated the human race then what would it do? Would it just sit around inside of technology getting dust? Would it manufacture robots to keep up with all of the computers? Wouldn't it eventually run out of electricity? None of it makes sense
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxViUJDT5oZEJhGu0F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxwr8rQX61wlbo-aQx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzjHwhbBGEr96g-yq54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoAqPiHGd219QueBh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyZV0Xo3NxqV_C1qQN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9OUi7QEJB-nUs2vB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrEi4hOq_lQWbROEF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzhGtFedBbm9GUantx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymFehxVrhhcR22pQV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwpsDQyX4iXzmuLQ7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
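Before a raw response like the one above is accepted into the coding table, it helps to parse it and check every row against the codebook. A minimal Python sketch; the allowed value sets below are only those observed in this section, and the real codebook may define more (an assumption):

```python
import json

# Allowed values per dimension, as observed in the responses above.
# The actual codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with a string id
    and a valid value for every coded dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row.get("id"), str):
            continue  # drop rows with a missing or malformed comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(len(validate_codes(raw)))  # 1
```

Rows that fail validation are dropped rather than repaired here; in practice one might instead queue them for a retry prompt or manual review.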