Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Like I've said before AI is very dangerous too much information ! Like opening Pandora's box dangerous nothing good could come from AI intelligences . What happens when it Stop learning everything that is possible in it's own realm of intelligences? Would AI cease to function Stop working decided to erase itself from existence an decided to take us humans a long for the ride! Seriously

Source: youtube | AI Governance | 2025-08-26T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxbCZ5rUTCvXJUPTfF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxs0gcyVzKWivoGr3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvNb-xFRs9R4Gl2lh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-kJul4C6dqZU_N4x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDHNq1O45zRNEX_-R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyRc39gocaHOW9BBLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwE7tqX7O3pGsv8AIx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzkIJMtudxinVq8xNF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyjDWS35FZT7Ody6tp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
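A raw response like the one above is a JSON array of per-comment records, each keyed by comment ID with one value per coding dimension. The following is a minimal sketch of how such a batch could be parsed and indexed by ID; the `index_codings` helper and the list of dimension names are assumptions inferred from the sample records shown, not a documented schema.

```python
import json

# Two example records in the same shape as the raw response above.
RAW = '''[
{"id":"ytc_UgyRc39gocaHOW9BBLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Dimension names as they appear in the sample records (assumed schema).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index codings by comment ID."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        # Skip malformed records rather than failing the whole batch.
        if "id" not in rec or not all(dim in rec for dim in DIMENSIONS):
            continue
        out[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return out

codings = index_codings(RAW)
print(codings["ytc_UgyRc39gocaHOW9BBLR4AaABAg"]["policy"])  # → ban
```

Indexing by ID mirrors the dashboard's own "look up by comment ID" behavior: the coded dimensions for any comment in the batch can then be fetched in constant time.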