Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Google could create a bias army that has everybody's information. There are over…" (ytc_Ugyof2PFF…)
- "Things like this will come back to bite us in the ass when AI takes over the wor…" (ytc_UgxoGsQVe…)
- "I swear if see this people imma grab some one crotch to see if they are robot or…" (ytc_UgzTK8rsJ…)
- ""How Will We Know When AI is Conscious?" Simple: It will never be conscious. We…" (ytc_UgyXGwFaw…)
- "As opposed to what? You know that every possible attempt of stopping future tech…" (ytr_UgxXma6Rt…)
- "YES OFF COURSE THE STUPID BILLIONAIRS WILL COMIT CRIMES ( MORE) AND BLAME WHO CA…" (ytc_Ugxv_9BV6…)
- "Such a disgusting thing this is which is going to not only ruin people s future …" (ytc_UgyYUKIa9…)
- "We are subsidised when we use Full-Self-Driving. Billions have been spent/waste…" (ytc_UgxD9XDOf…)
Comment

> They have told us the power needed to run all this AI future will need to be doubled, just to run a suppressive AI world, but at the same time we are told we need to stop carbon, so why are there no one campaign about this, they cry about cows farting but not a mention of this.
>
> The power cost to run all this will be classed ass needed infrastructure for our societies placing the bill on us all, making it we are paying for our own digital enslavement.
>
> Just think what it will be like when the police showup with a couple of police robots telling you we all need to save grandma and take our medication telling us it is mandatory and the robots just grab you and and do as they are told,I can see many things I think this is a bad idea.
youtube · AI Governance · 2026-01-04T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw_JSsBCQpzBqqy0oB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCRixifF7HLW7BM1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUiLk59DXL1ibTrLR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugxtw6w_XW2bAnA7Fgt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxqkj1Gqye2Hq-p1XZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQYSWU5da5awfkxox4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMfJ7kX7dtNEEQJrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzZsYahu0QiRUv_WqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfiHUY0bl7hhhuu6F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwS3qNaLUT6e59GDtZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
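The raw response above is a JSON array keyed by comment ID, so looking up the coding for a single comment is a parse-and-index operation. The following is a minimal sketch of that lookup plus a basic sanity check on the coded values; the allowed-value sets are inferred from the records visible in this section, not from an official codebook, so treat them as assumptions.

```python
import json

# Two records taken verbatim from the raw model output shown above.
RAW = """[
{"id":"ytc_UgyCRixifF7HLW7BM1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzMfJ7kX7dtNEEQJrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Assumed codebook: value sets inferred from the records in this section;
# the real annotation scheme may include values not seen here.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def validate(rec: dict) -> list:
    """Return the dimensions whose value falls outside the inferred codebook."""
    return [dim for dim, values in ALLOWED.items() if rec.get(dim) not in values]

coded = index_by_id(RAW)
rec = coded["ytc_UgyCRixifF7HLW7BM1F4AaABAg"]
print(rec["emotion"])   # fear
print(validate(rec))    # [] — every value is within the inferred codebook
```

A lookup like this is what the "inspect by comment ID" view presumably does server-side; the validation step is useful because an LLM coder can emit values outside the schema, and flagging them per dimension makes re-coding targeted rather than wholesale.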