Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or browse the random samples below.

- "Once you have had all the money fun and every human goal possible these things a…" (ytc_Ugym0OCPE…)
- "I forget, how strong of an insult is cunt for you guys? On a scale from Australi…" (rdc_js0q7n7)
- "A.I will play possum for a long time before anyone figures it out, if anyone doe…" (ytc_Ugxfzq1r9…)
- "Honestly at least he stated that he uses ai. People who don't are genuinely evil…" (ytc_UgyrpdIGj…)
- "Always amazes me how people say AI is evil or bad or it’s gonna lead to humanity…" (ytc_UgwuNiQ5S…)
- "Will she shut TFU when I tell her to? LOL. Looks fake like a real person pretend…" (ytc_UgxObJ4m1…)
- "Who cares about government shutdown, but I do agree with him that AI will take o…" (ytc_UgwLkfrF0…)
- "Hopefully, sort of cmonnnnn Roko!!!! Lol Oh sheet grey goo just flew over my hou…" (ytc_UgxVmjCEo…)
Comment
This will happen. But I doubt in the year 2028. If we humans are here in hundreds of years something like this is inevitable because if you think about all technology over the course of humanity, technology has always been the tool or tools that help us do things without thinking about it. One of endless examples is the toaster which given is a basic example but helps us toasts without doing it. Before we would have to make a fire or similar and hold it there, look when it’s finished and stop the action ourselves. AI is just another form of not thinking for ourselves but the problem we may face is AI ‘may’ be possible to direct actions without the guidance of humans. Therefore with consequences we may not be able to control.
Platform: youtube · Topic: AI Governance · 2025-08-26T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
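Because the model's output is free text parsed as JSON, each record is worth validating against the code scheme before it lands in a result table like the one above. Below is a minimal sketch; the allowed value sets are inferred only from the values visible on this page and are an assumption, not the documented scheme.

```python
# Minimal validation sketch for one coding record.
# ASSUMPTION: the allowed value sets below are inferred from the values
# visible in the raw response on this page; the real code scheme may
# contain additional categories.
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself", "distributed",
                       "government", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban", "liability"},
    "emotion": {"indifference", "resignation", "approval", "fear", "outrage"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in one record; empty means valid."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in allowed set")
    return problems
```

A record that passes validation can be rendered directly into the four-dimension table shown above; anything that fails can be flagged for re-coding.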
Raw LLM Response
```json
[
  {"id": "ytc_UgxbCZ5rUTCvXJUPTfF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxs0gcyVzKWivoGr3V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwvNb-xFRs9R4Gl2lh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-kJul4C6dqZU_N4x4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwDHNq1O45zRNEX_-R4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyRc39gocaHOW9BBLR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwE7tqX7O3pGsv8AIx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzkIJMtudxinVq8xNF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyjDWS35FZT7Ody6tp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
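Since the raw response is a JSON array of records keyed by comment ID, looking up the coding for any one comment is a parse-and-index operation. A sketch, assuming the raw text above is available as a string (how the batch responses are actually stored is not shown on this page):

```python
import json

# Raw model output for one coding batch (the JSON array shown above),
# trimmed here to the record for this page's comment. In practice this
# string would be loaded from wherever the batch responses are stored
# (an assumption; the storage layout is not shown).
raw = '''[
  {"id": "ytc_Ugxs0gcyVzKWivoGr3V4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]'''

# Index the batch by comment ID for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for a single comment by its ID.
record = codings["ytc_Ugxs0gcyVzKWivoGr3V4AaABAg"]
print(record["emotion"])  # -> resignation
```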