Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.

Random samples:

- "use Ai to create their own problems and then use Ai to test the results"🎉🎉🎉… (`ytc_UgyPjjJMN…`)
- @xjakanton That is true, being that it's not as explored so I can see where your… (`ytr_UgyQmg4yK…`)
- Hello there! We appreciate your engagement with the video. Rest assured, our AI … (`ytr_UgwEz0K0_…`)
- This show goes from….AI is dumb and an all slop..to AI is so capable that it’s g… (`ytc_Ugz-qSfAX…`)
- I wish AI would do the things we don't wanna do. Like do my taxes so I can go dr… (`ytc_Ugyry9I9C…`)
- The ruling in the Williams vs Trump (2045) case that ChatGPT taught me about say… (`ytc_UgwTyTnkr…`)
- Has an LLM ever been trained on Hebrew or ancient Greek or Aramaic? What happen… (`ytc_UgzPjuejR…`)
- This is like Astrology. The replies are so short you can read anything into them… (`ytc_Ugx9sX2if…`)
Comment (youtube · AI Governance · 2025-09-04T22:4…)

> I know the argument about "pulling the plug" is mocked but I still want to ask the question in a more specific way. Hypothetically, if AI became a global threat - meaning every single government saw it like a humanity-ending asteroid, are we saying that we can't just shut down every power sucking data center we're building at once? I mean, I don't understand how both things can be true: AI needs huge compute AND there is no way to shut down that compute completely.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyDiuOq5B1QJ-YxLrV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyAGMneUshFTs7YRwx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8MsFcYpiM25vCDsV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwazKOL5tDlGZx23AV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyqJgx6dbnAtqoryhV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx83_GhICXHEFViYN54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwaUtSM3vxU8QXOmWx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxaS7fiAd97mWOnksF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwTnEBpbv2dauWNNT94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzjDLdv3bPRGapFQp54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
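A raw batch response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal, hypothetical example: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed-value sets are only the codes visible in this batch, not necessarily the project's full codebook.

```python
import json

# Codes observed in this batch; the real codebook may define additional values.
OBSERVED = {
    "responsibility": {"company", "developer", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "resignation", "indifference", "approval", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM response into {comment_id: codes}.

    Raises ValueError if a record carries a code outside the known sets,
    which catches the common failure mode of the model inventing labels.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in OBSERVED}
        for dim, value in codes.items():
            if value not in OBSERVED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = codes
    return coded
```

Keying the result by comment ID is what makes the "look up by comment ID" view cheap: `parse_coding_response(raw)["ytc_UgwTnEBpbv2dauWNNT94AaABAg"]` returns exactly the four dimensions shown in the Coding Result table.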