## Raw LLM Responses

Inspect the exact model output for any coded comment.

### Look up by comment ID

### Random samples
- `ytc_UgyZbPIC4…`: "I made ChatGPT freely admit it had suggested replacing bromide with chloride wit…"
- `ytc_Ugx77TyeV…`: "The mother knew she didn't care about that boy that's why she blaming AI she a c…"
- `ytr_UgzQdfwdR…`: "You're absolutely right! \"Σοφία\" beautifully captures the essence of wisdom in G…"
- `ytc_UgxnqcAUY…`: "Stop with this bullshit rhetoric. AI is not taking peoples jobs. CEOs and Execs …"
- `ytc_UgwRwUp9t…`: "My art can exist in the real world and ai's cannot, and I will make sure it will…"
- `ytc_UgyPm2TU4…`: "The robot fuel the truck alone? No way! It is the goal to take driver place but …"
- `ytc_UgxfTuEiT…`: "Art idea that could be cool: A pencil cutting a 7-fingered ai hand off of an arm…"
- `ytr_Ugx7FgHkc…`: "Ai steals everything and you live under a rock or are totally delusional to not …"
### Comment

> Although I agree with this man I don’t agree with his timeframe. I think it’ll take a lot longer and people being more resistant to complete takeover. Perhaps in western countries cost will go down but the rest of the word would take considerably longer to catch up. I also wanted him to elaborate more on why we can’t “turn it off”? I know he gave an example like you can’t turn off a virus, or turn of bitcoin, and perhaps someone can educate me further but I don’t see how a super intelligent agent would be like this? The way I picture it is a massive super intelligent computer/ program locked in a bulletproof dome with the possibility to just turn it off, the same way how ChatGPT (if the governments agreed to) can simply be shut off. Even with viruses and Bitcoin it can theoretically be shut off (if it threatened humanity) by simply turning off the internet?

Platform: youtube · AI Governance · 2025-09-04T18:0…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
[
{"id":"ytc_Ugw5BJGtfn21YsshoSV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzF_qdRe0mqNFd4RyR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzh7lbYQNKRyOT76mt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCix13Q7HmQvxVR1h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwq4FPGS5qLu4KHW5J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxikqk31csSq-_SxOZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy5i9jJZpmwp6wiWO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhsooClMRmNB8E4kp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyr073V6NsoIocQJrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyruqR8zdH-dO_-fLl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
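The raw response above is a JSON array of per-comment codings, one object per comment ID with the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated is below; the `ALLOWED` sets are inferred only from the values visible in this sample (the real codebook may define more categories), and `parse_coding` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID,
    rejecting any value outside the known category sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a single-row response in the same shape as the sample:
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_coding(raw)
print(coded["ytc_EXAMPLE"]["emotion"])  # fear
```

Validating against an explicit category set catches the common failure mode of LLM coders drifting off-schema (e.g. inventing a new emotion label) before the row reaches the database.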