Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Other than the AI nonsense, alternative methods to teaching need to be explored.…" (ytc_Ugzppewx3…)
- "If you have a low IQ AI won't see you as a threat. Republicans will be safe.…" (ytc_UgykPsTgb…)
- "@barbatostea it is partially the drivers fault but the cyclist bears the majorit…" (ytr_UgxL3E4Ta…)
- "The people cloning you are the same trying to help you. Autonomous AI will drain…" (ytc_Ugw0AtOa0…)
- "Even without your explanation it wouldn't even take a neuroscientist to figure o…" (ytc_UgxAyxW2W…)
- "I sometimes use ChatGPT for personal stuff as well, but it never got so spiritua…" (rdc_my836dy)
- "This being disturbing has nothing to do with robots. I wish we got rid of all cl…" (ytc_Ugwhcd8JE…)
- "I disagree with Neil on the AI job replacement argument. Unlike the transition f…" (ytc_UgyMRbwjT…)
Comment
First off AI Safety Research is a real international thing. And governments can’t be relied upon to regulate such advanced technology. We need regulatory agencies being the actual developers. Not the existing agencies. It’s a whole new thing that requires an all new agency and all new regulations. I’m sure you agree. Even though you want to make intelligent robots that number one to one to humans. Second off, why are you talking to TC at all? Oh, it’s because you agree with his politics. Ok, now we all know.
youtube · AI Governance · 2023-04-21T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
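A coded record like the one above can be sanity-checked against the category sets that appear in this batch. A minimal sketch follows; note that the allowed-value sets below are inferred only from the values visible in this page's raw response, not from the project's full codebook, so they are assumptions.

```python
# Category sets inferred only from the values visible in this batch's raw
# response. The actual codebook may define additional categories; treat
# these sets as illustrative assumptions, not the official coding scheme.
OBSERVED = {
    "responsibility": {"government", "developer", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly:
coded = {"responsibility": "distributed", "reasoning": "contractualist",
         "policy": "regulate", "emotion": "mixed"}
print(validate(coded))  # [] -> every dimension has an in-range value
```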
Raw LLM Response
```json
[
  {"id":"ytc_UgyhHaYYQKbdcXRjjSd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz3S_D0PjBXG_jIupx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzTM8inCYHBmTIEFMF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwK1t9X5dMdZsqWXPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyiWrJAjBcLltVPvGd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxixiHztrqsiPKLvqN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXr-JxpyKip_RFfdh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2o41o3Dqrd69HAHh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxf5O9YdP_RmKNmoIt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy78PPgLZ0Yy-cMbO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
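Because the model returns one JSON array per batch, inspecting "the exact model output for any coded comment" reduces to parsing the array and filtering by the `id` field. A minimal sketch, using an abbreviated two-entry payload in place of the full ten-entry response above (the record structure is identical):

```python
import json

# Abbreviated stand-in for the raw batch response shown above; the real
# payload has ten records with the same five fields per record.
raw_response = '''
[
  {"id": "ytc_UgyiWrJAjBcLltVPvGd4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugy78PPgLZ0Yy-cMbO54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
'''

def lookup(comment_id, payload):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(payload)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup("ytc_UgyiWrJAjBcLltVPvGd4AaABAg", raw_response)
print(record["policy"])  # regulate
```

Parsing with `json.loads` rather than string matching also surfaces truncated or malformed model output immediately, since a partial array raises `json.JSONDecodeError`.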