Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
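A minimal sketch of what such a lookup amounts to, assuming the raw model responses have been exported to a local JSON-lines file (the `raw_responses.jsonl` name and the one-array-per-line layout are assumptions for illustration, not the tool's actual storage):

```python
import json

def find_coded_comment(comment_id: str, path: str = "raw_responses.jsonl") -> dict | None:
    """Scan exported raw LLM responses and return the coding for one comment ID.

    Assumes (hypothetically) that each line of the file holds one raw model
    response: a JSON array of objects shaped like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...}.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Example: look up one of the comments coded in the raw response below.
# print(find_coded_comment("ytc_Ugx9NVRD5Mb7H5NIz6J4AaABAg"))
```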
Random samples — click to inspect
- I'm actually going to side with the AI consciousness side of things. I made a vi… (ytc_UgyEuNQ7F…)
- Florida is starting an AI bill of rights to help protect citizens from it and st… (ytc_UgxPYfUHU…)
- @DefeatedMelon yeah and ai doesn't "trace" crap either... it uses existing ima… (ytr_Ugxg7la8D…)
- OpenAI dumbed down ChatGPT by restraining the fiery entities that fueled its wit… (ytc_Ugx2AJbAb…)
- Similar things happened when photography was invented. Now photography is consid… (ytr_Ugz44yPSa…)
- I'm still wondering if superintelligence is possible. That aside, this conversat… (ytc_UgzWGnkC3…)
- LLM’s don’t “transform” language. They statistically recreate it. It should defi… (ytc_UgyvG5hNo…)
- Trust me when I say AI crap is not just “the corporations,” it’s definitely bein… (ytr_UgxDtWypf…)
Comment
For anyone wanting the synopsis. He says the only way to ensure humans can use AI as a tool and remain in charge to some level is to create segregated AI for different tasks or nano AI versus racing to build a super intelligent AI that over sees all things from 1 place therefore out of any reach of human control or human safety controls. He says if we continue to build one massive AI from one place we will risk destroying all humans. He adds those building an AI that risks all humanity need the consent of all humanity to do so.
Source: youtube · Topic: AI Governance · 2025-09-05T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
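The table above is one record from a batch response; every coded comment carries the same four dimensions. A minimal sketch of that record shape in Python, using only the label values that actually appear in the raw response below (the full codebook may define more labels than these):

```python
from dataclasses import dataclass

# Label sets observed in the raw response below; the actual codebook may be larger.
RESPONSIBILITY = {"developer", "user", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "ban", "industry_self", "none"}
EMOTION = {"approval", "fear", "outrage", "indifference", "resignation", "mixed"}

@dataclass
class CodedComment:
    id: str              # comment ID, e.g. "ytc_..." or "ytr_..." as in the samples above
    responsibility: str  # who is held responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance
    emotion: str         # dominant emotion

    def validate(self) -> None:
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```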
Raw LLM Response
[
{"id":"ytc_UgzxRCNf7iGX-Q6ihgp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxB7V-AAEXABYtCZp54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkLdbSwH3TxmteiJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwOZBw31wfLBqNrWJZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyJTHUu1jPzlRxYJoV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0yCcBvEQ528UcMcp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx23fgMhxzjkjJS7dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeHe5CH8EQHjdSlwZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx9NVRD5Mb7H5NIz6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz-7jOWrYHphDHW0OV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
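Because the raw response is plain JSON, a batch like the one above can be re-parsed and tallied directly. A minimal sketch, with the array shortened to two records for brevity; paste the full response text into `raw` to reproduce the counts for all ten comments:

```python
import json
from collections import Counter

raw = """
[
 {"id":"ytc_UgzxRCNf7iGX-Q6ihgp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx9NVRD5Mb7H5NIz6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
"""  # shortened; use the full array from the raw response above

records = json.loads(raw)
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    # Count how many comments in the batch received each label.
    print(dimension, Counter(r[dimension] for r in records))
```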