Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
we need a standard that all self driving cars use, a standard in which they can …
ytc_UgyKUDGVa…
AI translucent aluminum is called ruby, emerald, they make saphire watch faces a…
ytr_Ugx-xRpKW…
I would predict that middle managers are the first to go, not juniors. Algorithm…
ytc_UgwnaoXP-…
Aleksandra, your anger is completely justified — in fact, you’ve touched the ver…
ytc_UgwQLs9vn…
Can't take all tech jobs and work and there's nothing that suggests that will ev…
ytr_UgyFGfgHV…
Teaching AI the concept of free will can lead to more nuanced decision-making an…
ytr_UgycLA1rV…
If AI takes over all the working class jobs, there needs to be a new system for …
ytc_Ugyz84ukG…
He's actually anything but that.
Hinton likes to make quick jibes, especially a…
ytr_UgzBl0HXh…
Comment
AI says:
"We’re racing toward AGI. To survive, we either align it, slow it down, or stop it cold.
The scariest part? 'If we don't build it, someone else will' is the fuel behind the fire."
Containment Strategies:
+ Hard Ban: Full-stop on AGI development -- nationally or globally enforced.
+ Target AGI Labs: Cyber or physical sabotage to stop runaway AGI.
+ Seize Computation: Control all chip production to choke AGI capability.
+ Offline Enclaves: Go off-grid -- escape the AGI world entirely.
+ Trap the AGI: Deceive it in a sandbox before it breaks free.
+ Nuke the Cloud: Destroy digital infrastructure if AGI gets too close to takeoff.
+ Adversarial AI: Deploy AI to deliberately disrupt or force-align AGI projects."
Source: youtube
Tag: AI Harm Incident
Posted: 2025-07-25T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy9XNSUXjNzQn8zObR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgytTT0TCIdfNt2Boet4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwIMyfMe7BUxFpVNoJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyxC04S3pjJKno-is94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxwZEPAlypo48kmP1J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZrdF86LwtOZ5-gMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkClpuXiq1jogGykx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy4-g20X8iH-pV_Cm54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweBuj-dO_0rL9s2Bx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVA0R1QZwmyEhM4M94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
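The lookup-by-comment-ID view above can be sketched in a few lines: parse the raw LLM response as JSON and index the records by their `id` field. This is a minimal illustration, assuming only the record shape shown in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the sample below inlines two records from it.

```python
import json

# Two records copied from the raw LLM response above, one coded record per comment ID.
raw = """
[
  {"id":"ytc_Ugy9XNSUXjNzQn8zObR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgweBuj-dO_0rL9s2Bx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

records = json.loads(raw)

# Index by comment ID so a full ID resolves to its coding in O(1).
by_id = {r["id"]: r for r in records}

hit = by_id["ytc_UgweBuj-dO_0rL9s2Bx4AaABAg"]
print(hit["policy"], hit["emotion"])  # regulate fear
```

Note that the lookup needs the complete ID; the truncated IDs shown in the sample list (e.g. `ytc_UgyKUDGVa…`) are display abbreviations only.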