Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgyoaZoDa…`: "Drugs are much more dangerous than AI and AI is much more dangerous than nukes."
- `ytc_UgyoLy3FJ…`: "I need to know whether A.I , if given a target, would care about the innocent li…"
- `rdc_hlw810y`: "How is this a concentration camp and not just a regular prison? This is just som…"
- `rdc_l5kz5sx`: "If all the people with a moral compass are being fired or resigning I don't much…"
- `ytr_Ugy4lYi86…`: "@Serbokrat nah not the website, amazon had physical in store locations like walm…"
- `ytc_UgzHW8FaB…`: "I [[ *H A T E.* ]] AI MORE THAN I HATE [[Clown Around Town]]!! [[Free Sample]] A…"
- `ytc_UgyTU8FH7…`: "lol you have AI wipe your prod env and then blame the AI? Give a monkey a gun mu…"
- `ytc_UgzHcrFET…`: "they are going to get the robots to rob the taxpayers so the robots can pay the …"
Comment
I think one of the questions that needed to be asked is this: If you were to measure the power usage of a chatGPT server when it has NO questions coming in, does it have idle power usage?
Why does that matter? It matters because it answers one of the questions they were dancing around; DOES a large language model 'think' when it isn't being asked a question?
What a couple of people were trying to say, and doing so very poorly, is that it is only reactionary. It only 'thinks' when there is an input - a question.
It isn't sitting there contemplating life. When I finally learned that fact, I felt a lot better about the problem.
Does that remove all risks, especially during the training phases? No.
youtube · AI Governance · 2026-03-25T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyY9ISvt41VWhB07sl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwmiuTE7f9CMltAYhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzopRRhZ45NAVyQTCF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwMMRrux9L5OrnBLqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4yPrlGeN8KS4YT3l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLpRaatw062W6gknx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6crD-k1uUs8ySInh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx77922bRRhBEvNv9t4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwY_mpjJnb33DXS3WV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLa5_YvVmKcWthrGR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
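A raw response like the one above can be parsed and checked before its labels are stored as a coding result. The sketch below is a minimal, hypothetical validator: the four dimension names and the example IDs come from the JSON above, but the allowed value sets are assumptions for illustration and may not match the project's actual codebook.

```python
import json

# Allowed values per dimension. These sets are ASSUMED for
# illustration; the real codebook may define different labels.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference",
                "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: labels}.

    Records missing a field, or using a value outside the codebook,
    are dropped rather than stored.
    """
    coded = {}
    for rec in json.loads(raw):
        labels = {dim: rec.get(dim) for dim in ALLOWED}
        if all(labels[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = labels
    return coded
```

Looking up a coded comment by ID, as the page above supports, is then a plain dictionary access on the parsed result.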