Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
Random samples — click to inspect
- `ytc_UgxESp8k-…`: No question a tough transition period is coming but AI is inevitable regardless …
- `ytc_UgyqNh45P…`: I longingly await when AI develops legitimate emergent intelligence. I would lov…
- `ytc_UgyjfDkov…`: Seeming disconnect between graph at 3:52 and summary of GPT-4.5's supposed "beha…
- `ytc_UgzefO_F9…`: Here's an ethical dilemma: should you buy a self driving car? The morally corre…
- `rdc_lr68zwh`: What exactly are these ai safety people proposing to do? I understand the concer…
- `ytc_UgyAOaA6o…`: Same. I've been told I'm a moron for voiced my issue about the AI generated imag…
- `ytc_UgiKaZGwj…`: (I hope you don't mind me posting this same comment on more than one of your vid…
- `ytc_UgwGfzfPI…`: I understand that you're feeling frustrated with AI. It's a rapidly evolving fie…
Comment

> This is just nonsense. If your AI suddenly decides to take over the world, you just unplug it and turn it off.

youtube · AI Governance · 2025-08-07T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyOq2S9H2Q1sw9fcSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQHfJWLSHtiGns1MB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7jKD1tDJ9VPPLnOt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPQg4R0oY7flU7bDZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwrGaiYqNVEuFb-12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwGe3mOQhS2l2uFNUt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAKbdUDIixqUJKtRJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyXgga_zvAkKQNCMHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtpEIKAx7sKMqqIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQ1I2Wbyxx8vfdtMt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
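A batch response like the one above can be parsed and sanity-checked before the rows are written back to the coding table. The sketch below is a minimal, hypothetical validator, not part of the tool itself: the allowed label sets are only those *observed* in this sample (the full codebook may define more), and the function and variable names are illustrative.

```python
import json

# Labels observed in the sample response above; the real codebook
# may allow additional values (assumption).
OBSERVED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag rows with unexpected labels."""
    rows = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    for row in rows:
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# One row from the sample above, used as a smoke test.
raw = ('[{"id":"ytc_UgyOq2S9H2Q1sw9fcSR4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
rows = parse_batch(raw)
print(len(rows))  # 1
```

Validating against a closed label set catches the most common failure mode of LLM coding runs: the model inventing an off-schema label that would otherwise silently pollute the dimension counts.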