Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I understand because once I feel asleep talking to a ai bot and my mom had came…" (ytc_UgzLOt4Bc…)
- "Yeah you should sue Tesla free even letting an automated system running down the…" (ytc_UgxTwF6hl…)
- "I think after AI takes over, we will eventually be down to a few bands of hiding…" (ytc_UgzzqxzR6…)
- "1:20 As a Tesla owner myself… I will say this, I use self driving EVERY single d…" (ytc_UgxBMLfoG…)
- "Wow, you jump right to Carla is lying. Not Carla is wrong, Not Carla is misinfor…" (ytr_Ugz8bOCyY…)
- "The thing companies don’t seem to get is, if everything is automated, soon they …" (ytc_Ugw-DS84i…)
- "😂. Your comment shows that you don't understand what the threat is. How is it p…" (ytr_Ugxb3QQvm…)
- "AI just needs to grow and get bigger and better! No regulation nonsense! Grow ba…" (rdc_jj974cx)
Comment
The AI systems are still finite number sets, but human brains are less subject to hallucination unlike AI models. The safety will depend on human beings. Human error exists and is real. Whether the AI is aligned with our biases is not the real issue. The real issue is whether the AI models will not be abused by human beings. AI may actually create new knowledge and help create new jobs for the millions. AI cannot work in the fields and there are jobs that AI cannot do.
youtube · AI Governance · 2025-09-05T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzxRCNf7iGX-Q6ihgp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxB7V-AAEXABYtCZp54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzkLdbSwH3TxmteiJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwOZBw31wfLBqNrWJZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyJTHUu1jPzlRxYJoV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0yCcBvEQ528UcMcp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx23fgMhxzjkjJS7dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeHe5CH8EQHjdSlwZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx9NVRD5Mb7H5NIz6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz-7jOWrYHphDHW0OV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
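A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the four coding dimensions shown above and an allowed-value set inferred from the records on this page (the actual codebook may contain additional values):

```python
import json

# Allowed values per dimension, inferred from the coded records above.
# This is an assumption, not the full codebook; extend as needed.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "industry_self"},
    "emotion": {"indifference", "resignation", "approval", "mixed",
                "fear", "outrage"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded record's fields."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec[dim]!r}")
    return records

# Example with one record from the response above
raw = '''[
  {"id":"ytc_UgzxRCNf7iGX-Q6ihgp4AaABAg","responsibility":"user",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''
records = validate_records(raw)
by_id = {rec["id"]: rec for rec in records}  # supports lookup by comment ID
```

Indexing the validated records by `id`, as in the last line, is what makes the "look up by comment ID" view above cheap to serve.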